
AUTOMATED INFERENCE AND LEARNING IN MODELING FINANCIAL VOLATILITY

Published online by Cambridge University Press:  08 February 2005

Michael McAleer
Affiliation:
University of Western Australia

Abstract

This paper uses the specific-to-general methodological approach that is widely used in science, in which problems with existing theories are resolved as the need arises, to illustrate a number of important developments in the modeling of univariate and multivariate financial volatility. Some of the difficulties in analyzing time-varying univariate and multivariate conditional volatility and stochastic volatility include the number of parameters to be estimated and the computational complexities associated with multivariate conditional volatility models and both univariate and multivariate stochastic volatility models. For these reasons, among others, automated inference in its present state requires modifications and extensions for modeling in empirical financial econometrics. As a contribution to the development of automated inference in modeling volatility, 20 important issues in the specification, estimation, and testing of conditional and stochastic volatility models are discussed. A “potential for automation rating” (PAR) index and recommendations regarding the possibilities for automated inference in modeling financial volatility are given in each case.

Type
Research Article
Copyright
© 2005 Cambridge University Press

A person cannot possibly seek what he knows, and, just as impossibly, he cannot seek what he does not know, for what he knows he cannot seek, since he knows it, and what he does not know he cannot seek because, after all, he does not even know what he is supposed to seek.

 —Søren Kierkegaard (1813–1855), Philosophical Fragments

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.

 —Donald Rumsfeld, Department of Defense news briefing (transcript), February 12, 2002

“What do you want to know?”

“Things that you don't know you know.”

 —The Tailor of Panama (film)

1. INTRODUCTION

Three important components in econometric modeling are specification, estimation, and testing. In respect to testing, McAleer (1994, p. 334) has echoed a common concern regarding the development of what might be perceived as an excessive number of formal statistical tests, as follows:

A test that is never used has zero power.

Although the statement is clearly not intended to be technically correct in respect to statistical theory, it nevertheless captures the feeling that a technique that is not used may be of little value. A less controversial statement would seem to be the following:

An automated method of inference that is never used has zero value.

These two statements, and also their corollaries relating to the infrequent use of certain tests and modeling strategies, would seem, prima facie, to be sufficiently self-evident as to form an implicit code for empirical econometrics. In fact, as such statements are frequently ignored in practice, they might be seen more as guidelines than regulations. Econometric modeling techniques have been developed for purposes of being used in practice. When theory is developed in the absence of application, it might be seen as being of limited, if not zero, applied value. However, automated methods of inference are designed to be used, even though they may suffer from a multitude of sins, including bias and inefficiency in estimation and pretest bias in inference. This paper is concerned with automated methods of inference in modeling financial volatility.

Regardless of how they might be perceived, these statements relate to an important aspect of econometric modeling, namely, testing, and also to any automated modeling strategy that is based on estimation and testing. Although estimation and testing have had a long and sometimes controversial history in econometrics, these issues are not of primary concern in this paper. Estimation and testing are discussed here only insofar as they bear on automated inference and learning in modeling financial volatility.

Echoing the sentiments of Donald Rumsfeld, whose remarks are a colloquial rendition of the much earlier thoughts of the Danish philosopher Søren Kierkegaard in summarizing Plato's theory of recollection, if we ignore what we do not know we know, as in The Tailor of Panama, and what we do not know we do not know, the important remaining question is the following: What do we know we know, and what do we know we do not know, about automated inference in empirical financial econometrics?

The dialogue concerning the general-to-specific (or Gets) modeling approach by Granger and Hendry (this issue) is concerned with a new tool for econometric modeling. As compared with the specific-to-general (or Stog) approach that is more standard in science (for a discussion of these two approaches, see, e.g., Keuzenkamp and McAleer, 1995), Gets would seem to be particularly useful for modeling the conditional mean using stationary and nonstationary time series data. Indeed, regardless of the reasons for which it was developed, Gets has primarily been used for this purpose in econometric practice.

One of the primary purposes of this paper is to illustrate aspects of the Stog methodological approach in the context of modeling financial volatility from 1982, the year in which the seminal work of Engle (1982) on autoregressive conditional heteroskedasticity (ARCH) was published, to the present. This literature has typically been concerned with developing and estimating models, even before their structural properties, including the analysis of regularity and moment conditions and the statistical properties of the estimators, have been established. A critical aspect of the financial volatility literature is that problems with existing theories are resolved through finding solutions as the need arises. This Stog approach is entirely consistent with scientific progress.

Many of the problems that are considered in this paper either do not arise or have not (yet) been considered in the Gets approach. This is not intended as a specific criticism of Gets but rather as an indication of the empirical models that might not be encountered by users of any such program in practice. As an illustration, the following words that are widely used in analyzing financial data do not appear in the dialogue in Granger and Hendry (this issue): aggregate, ARCH, asymmetric, asymmetry, Brownian motion, clustering, conditional likelihood, continuous, diffusion, double mover, exponential, extreme, factors, fat tails, fractional, GARCH, high frequency, jumps, knowledge, kurtosis, LAD, latent, learning, leptokurtosis, leverage, long memory, MCL, MCMC, Monte Carlo likelihood, multivariate, nonnormality, non-normality, nonparametric, non-parametric, numerical likelihood, outliers, periodic, persistence, resampling, re-sampling, returns, reversion, sampler, sample splitting, seasonality, semiparametric, semi-parametric, single mover, skewness, spurious, stochastic, stochastic volatility, SV, threshold, transition, ultra-high frequency, value-at-risk, VaR, Wiener process, and volatility; and there is only one reference to each of the following words: moments (in the context of instrumental variables [IV] estimation in question 1) and risk (in question 13). These important aspects of modeling financial data do not (yet) seem to have been accommodated in the Gets approach, although they certainly could be.

It might be argued that the number of variations of financial volatility models is greater than for the various distributed lag, cointegration, and error (or equilibrium) correction models (ECM) that are available in the econometric time series literature. An ECM is typically a time series model of stationary and nonstationary variables with different data transformations, alternative functional forms, and lag structures. It could also accommodate threshold effects, latent variables, outliers and extreme observations, and conditional and stochastic higher order moments, among other extensions, but these have generally not been considered worthwhile in practice. In many cases, a linear ECM with a logarithmic transformation is favored, with the lag structure being a key consideration in the specification. This is in marked contrast to financial volatility models, in which the lag structures of both the conditional mean and volatility seem to be unimportant relative to other problems that arise in practice.

The number of parameters to be estimated, and the computational difficulties associated with estimating multivariate conditional volatility models and both univariate and multivariate stochastic volatility models, are fundamental to the determination of a specific functional form for volatility. In practice, the dynamic specifications of both generalized ARCH (GARCH) and stochastic volatility (SV) models are frequently determined prior to estimation. Therefore, modifications and adaptations would seem to be required for automated model selection techniques in modeling financial volatility, especially in comparison with the computational difficulties that presently exist.

Knowledge accumulation and methodology in the philosophy of science have a long history. Dharmapala and McAleer (1996) discuss econometric methodology in the philosophy of science in terms of the traditional, instrumentalist, and Popperian falsificationist approaches. The critical methodological analyses of various papers in Zellner, Keuzenkamp, and McAleer (2001) provide alternative perspectives on the philosophy of science with regard to simplicity, inference, and modeling.

As the literature related to modeling financial volatility is broad and ever increasing, some limits need to be placed on what can be discussed in the paper. For this reason, models associated with temporal aggregation, factors, orthogonalization, long memory, fractional integration, periodicity, transitions, thresholds, high frequency data, and nonparametric analysis will not be considered. Techniques for automated inference will undoubtedly also be useful for each of these topics, but this will have to be left for another forum.

The plan of the remainder of the paper is as follows. Section 2 discusses some aspects of knowledge and wisdom and whether automated procedures assist in the acquisition of knowledge. Various univariate and multivariate conditional and SV models are presented in Section 3 according to the practical economic and financial issues which they were intended to address. As a contribution to the development of automated methods of inference in modeling financial volatility, 20 important issues in the specification, estimation, and testing of modeling univariate and multivariate GARCH and SV are discussed in Section 4. A potential for automation rating (PAR) index and recommendations regarding the possibilities for automated inference in modeling financial volatility are given in each case. Some concluding comments are given in Section 5.

2. KNOWLEDGE AND LEARNING

Real knowledge is to know the extent of one's ignorance.

—Confucius, 551 BC–479 BC

Imagination is more important than knowledge.

—Albert Einstein

I taught them everything they know, but not everything I know.

—James Brown

It would be ironic if, inside the Wharton model, we found in the end, Lawrence Klein.

—Paul Samuelson

What you get with Gets is Hendry. What you would like to emulate with the volatility version is Engle.

—Adrian Pagan

Your born intelligencer is the man who knows what he is looking for before he finds it.

 —John le Carré, The Tailor of Panama

Model selection is an application of a set of rules, so it can be applied to any empirical problem after the set of rules has been determined. It would, of course, be possible to develop an automated modeling strategy for financial volatility by incorporating a set of rules that would be pertinent for practitioners in finance.

However, automated inference should not be confused with automated learning, as one does not imply the other. Confucius stated that a realization of one's ignorance was necessary to gain knowledge. Albert Einstein's dictum about imagination versus knowledge has been cited widely, almost as much as his famous remark about inspiration versus perspiration. The quote from James Brown makes it clear that the student is rarely as well informed as the teacher. This is almost certainly the case with most users of the Gets program, especially in comparison with its masterful developer. This is not at all surprising as, in this case, the master is acknowledged to be a genuine expert in the field.

Long before the invention of IBM's “Deep Blue” chess-playing machine, Paul Samuelson (1975, p. 8) told the story that “There used to be a marvelous chess machine that could beat all comers. But alas it turned out that curled inside the machine was an actual man.” His quote suggests that the Wharton model was heavily influenced by Lawrence Klein. Adrian Pagan suggests that using the Gets package might be a relatively inexpensive way of hiring David Hendry as an automated consultant for modeling the conditional mean using econometric time series data. A volatility version of the Gets software package might then be an inexpensive way of hiring Rob Engle as an automated financial consultant. These sentiments would likely be shared by many in the profession. As there is presently no volatility version of Gets, one of the purposes of this paper is to provide some broad directions as to how this might be accomplished. It will be left to others to implement the design. A related approach has been considered by Hansen, Lunde, and Nason (2003), who use the model confidence set approach to choose the best univariate volatility model.

Education is intricately intertwined with knowledge, wisdom, imagination, and learning. We learn from our mistakes, sometimes agonizingly slowly, but we learn nonetheless. It is not clear how automated methods of modeling can teach us how to learn from mistakes in learning. Nevertheless, as in the case of the Tailor of Panama, born intelligencers apparently know what they want even before they find it. Although this may well be true for the developers of macroeconometric models and econometric software packages, it is less likely to hold for their disciples. Therefore, a requirement for automated methods of inference and modeling should be a clear statement of how a user is to learn from the mistakes that are likely to be made in the process of determining an optimal model. Some of these issues are considered in Section 4.

3. UNIVARIATE AND MULTIVARIATE CONDITIONAL AND STOCHASTIC VOLATILITY MODELS

A favorable comment can increase happiness momentarily, but a negative comment can last forever. That is asymmetry.

The concept of Value-at-Risk (VaR) is widely used in the finance industry to determine the maximum loss over a specified time horizon with a given level of probability. It is possible to calculate the numerical values of VaR directly. However, as VaR is typically dependent upon the specific models determined for GARCH and SV, and also the distributions of the underlying shocks, it is sensible to estimate GARCH and SV models as important components in the determination of the VaR thresholds. It is, therefore, possible to capture the evolution of the underlying distributions using a combination of GARCH and SV models, and also through direct time series modeling of VaR thresholds. As the problems arising from modeling univariate and multivariate GARCH and SV volatility models will affect the VaR thresholds, it is important to understand how frequently arising problems can affect the estimation and analysis of the GARCH and SV models.
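To make the link between a fitted volatility model and a VaR threshold concrete, the following sketch computes a one-step-ahead VaR from a GARCH(1,1) variance forecast, assuming normally distributed standardized shocks; all parameter values are illustrative rather than estimates from data.

```python
import numpy as np
from scipy.stats import norm

def garch11_var(eps_t, h_t, omega, alpha, beta, level=0.01):
    """One-step-ahead VaR threshold for a single return series.

    Assumes a GARCH(1,1) conditional variance and N(0,1) standardized
    shocks; the parameter names and values are illustrative only.
    """
    h_next = omega + alpha * eps_t ** 2 + beta * h_t  # variance forecast
    return norm.ppf(level) * np.sqrt(h_next)          # negative return threshold

threshold = garch11_var(eps_t=-0.02, h_t=0.0004, omega=1e-5,
                        alpha=0.05, beta=0.90, level=0.01)
```

The threshold inherits the dynamics of the volatility forecast, which is why misspecification in the GARCH or SV component feeds directly into the VaR estimate.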

A range of multivariate models can be used for asset allocation, portfolio risk evaluation, and dynamic portfolio analysis, and it is possible to determine which are optimal under different econometric and financial scenarios. For a portfolio of assets, it may be appropriate to model covariances and conditional covariances to determine VaR thresholds, but static and dynamic correlations and conditional correlations are typically more important in making decisions relating to pairs of assets in the optimal design of a portfolio.

This section provides a critical analysis of the development of some important univariate and multivariate GARCH and SV models of financial volatility, including the specification of the models. The structural properties of the models and the statistical properties of the estimators are summarized in Section 4. Several of these models are already available in standard econometric software packages such as RATS and EViews. Twenty important issues in the specification, estimation, and testing of conditional and SV models are discussed. A PAR index and recommendations regarding the possibilities for automated inference in modeling financial volatility are given in each case. The following sections are intended to show the substantial scope that is available for numerous topics of interesting research in the area of modeling financial volatility.

3.1. Constant Conditional Correlation GARCH Models

The typical specifications underlying the multivariate conditional mean and conditional variance in returns are given as follows:

where yt = (y1t,…,ymt)′, ηt = (η1t,…,ηmt)′ is a sequence of independently and identically distributed (i.i.d.) random vectors, Ft is the past information available to time t, Dt = diag(h1t^1/2,…,hmt^1/2), m is the number of returns, and t = 1,…,n. The constant conditional correlation (CCC) model of Bollerslev (1990) assumes that the conditional variance for each return, hit, i = 1,…,m, follows a univariate GARCH process (see Engle, 1982; Bollerslev, 1986), that is,

where αij represents the ARCH effects, or the short-run persistence of shocks to return i, and βij represents the GARCH effects, or the contribution of shocks to return i to long-run persistence, namely,

The conditional correlation matrix of CCC is Γ = E(ηtηt′|Ft−1) = E(ηtηt′), where Γ = {ρij} for i, j = 1,…,m. From (1), εtεt′ = Dtηtηt′Dt, Dt = (diag Qt)^1/2, and E(εtεt′|Ft−1) = Qt = DtΓDt, where Qt is the conditional covariance matrix. The conditional correlation matrix is defined as Γ = Dt^−1 Qt Dt^−1, and each conditional correlation coefficient is estimated from the standardized residuals in (1) and (2). As such, there is no multivariate estimation involved for CCC, except in the calculation of the conditional correlations.
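As an illustration of how little multivariate estimation CCC actually requires, the following numpy sketch simulates two GARCH(1,1) return series whose standardized shocks have a constant correlation, filters the conditional variances with the (known) parameters, and recovers the constant conditional correlation matrix from the standardized residuals; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5000, 0.6

# Correlated i.i.d. standardized shocks (eta_t) with constant correlation rho
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
eta = rng.standard_normal((n, 2)) @ L.T

# Each return follows its own univariate GARCH(1,1): h_t = w + a*eps^2 + b*h
w, a, b = 1e-5, 0.05, 0.90
h = np.full(2, w / (1 - a - b))        # start at the unconditional variance
eps = np.empty((n, 2))
H = np.empty((n, 2))
for t in range(n):
    H[t] = h
    eps[t] = np.sqrt(h) * eta[t]
    h = w + a * eps[t] ** 2 + b * h

# CCC: the constant correlation matrix of the standardized residuals
std_resid = eps / np.sqrt(H)
Gamma = np.corrcoef(std_resid, rowvar=False)
```

The estimated off-diagonal element of Gamma recovers rho, with only univariate GARCH filtering involved.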

Although the CCC specification in (2) is a computationally straightforward multivariate GARCH model, it assumes independence of the conditional variances across returns and does not accommodate asymmetric behavior. To accommodate interdependencies, Ling and McAleer (2003) proposed a vector autoregressive moving average (VARMA) specification of the conditional mean in (1) and the following specification for the conditional variance:

where

, and W, Ai for i = 1,…,r and Bj for j = 1,…,s are m × m matrices. As in the univariate GARCH model, VARMA-GARCH assumes that negative and positive shocks have identical impacts on the conditional variance. To accommodate the asymmetric impacts of positive and negative shocks, Chan, Hoti, and McAleer (2002) proposed the VARMA-AGARCH specification for the conditional variance, namely,

where Ci are m × m matrices for i = 1,…,r, and It = diag(I1t,…,Imt), where

If m = 1, (4) collapses to the asymmetric GARCH, or GJR, model of Glosten, Jagannathan, and Runkle (1992).

The parameters of models (1)–(4) are typically obtained by maximum likelihood estimation (MLE) using a joint normal density, namely,

where θ denotes the vector of parameters to be estimated in the conditional log-likelihood function and |Qt| denotes the determinant of Qt. When ηt does not follow a joint multivariate normal distribution, equation (5) is defined as the quasi-MLE (QMLE).

When the number of returns is m = 1, the univariate equivalent of (2) is the GARCH(r,s) model, as follows:

where ω > 0, αj ≥ 0 for j = 1,…,r, and βj ≥ 0 for j = 1,…,s are sufficient to ensure that the conditional variance ht > 0. As (6) can be expanded as an infinite expansion in εt−j^2, a univariate GARCH model is also known as an ARCH(∞) model.
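The ARCH(∞) representation can be checked numerically: iterating the GARCH(1,1) recursion and evaluating the expansion h_t = ω/(1−β) + α Σ_j β^j ε²_{t−1−j}, truncated at the sample start, give the same conditional variance. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w, a, b = 1e-5, 0.05, 0.90                     # illustrative GARCH(1,1) parameters
eps2 = 1e-4 * rng.standard_normal(2000) ** 2   # illustrative squared shocks

# Conditional variance by the GARCH(1,1) recursion
h = w / (1 - a - b)                 # start at the unconditional variance
for e2 in eps2:
    h = w + a * e2 + b * h

# The same variance from the ARCH(infinity) expansion (truncated at t = 0)
j = np.arange(len(eps2))
h_inf = w / (1 - b) + a * np.sum(b ** j * eps2[::-1])
```

With 2,000 observations the truncation error is of order b**2000 and the two values agree to machine precision.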

Equation (6) assumes that a positive shock (εt > 0) has the same impact on the conditional variance, ht, as a negative shock (εt < 0). To accommodate differential impacts on the conditional variance between positive and negative shocks, Glosten et al. (1992) proposed the following specification for ht:

which is a special case of (4). When r = s = 1, ω > 0, α1 ≥ 0, α1 + γ1 ≥ 0, and β1 ≥ 0 are sufficient conditions to ensure that the conditional variance ht > 0. The short-run persistence of positive (negative) shocks is given by α1 (α1 + γ1). When the conditional shocks, ηt, follow a symmetric distribution, the expected short-run persistence is α1 + γ1/2, and the contribution of shocks to expected long-run persistence is α1 + γ1/2 + β1.

An alternative specification that accommodates asymmetries between positive and negative shocks is the EGARCH model of Nelson (1991), namely,

which models the logarithm of conditional volatility. In (8), |ηti| and ηti capture the size and sign effects, respectively, of the standardized shocks. Unlike GARCH and GJR, EGARCH in (8) uses the standardized residuals rather than the unconditional shocks. As EGARCH also uses the logarithm of conditional volatility, there are no restrictions on the parameters in (8). As the standardized shocks have finite moments, the moment conditions of (8) are straightforward.
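A minimal simulation illustrates why no parameter restrictions are needed in (8): whatever the signs of the size, sign, and persistence coefficients (the values below are purely illustrative), exponentiating the log-volatility recursion always yields a positive conditional variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative EGARCH(1,1)-style recursion with size (|eta|) and sign (eta) effects
w, a, g, b = -0.1, 0.1, -0.08, 0.95    # no sign restrictions are required
log_h = 0.0
h_path = []
for eta in rng.standard_normal(1000):
    log_h = w + a * abs(eta) + g * eta + b * log_h
    h_path.append(np.exp(log_h))       # variance is positive by construction
h_path = np.array(h_path)
```

Because the recursion operates on log-volatility and uses standardized shocks, positivity holds for any parameter values, in contrast to GARCH and GJR.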

Nelson (1991) derived the log-moment condition for GARCH(1,1) as

E(log(α1ηt^2 + β1)) < 0,  (9)

which is important in deriving the statistical properties of the QMLE. Ling and McAleer (2002a) established the log-moment condition for GJR(1,1) as

E(log((α1 + γ1 I(ηt))ηt^2 + β1)) < 0.  (10)

As E log(1 + zt) ≤ E zt, setting zt = α1ηt^2 + β1 − 1 shows that the log-moment condition in (9) can be satisfied even when α1 + β1 > 1 (i.e., in the absence of second moments of the unconditional shocks of the GARCH(1,1) model). Similarly, setting zt = (α1 + γ1 I(ηt))ηt^2 + β1 − 1 shows that the log-moment condition in (10) can be satisfied even when α1 + γ1/2 + β1 > 1 (i.e., in the absence of second moments of the unconditional shocks of the GJR(1,1) model).
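The point that the log-moment condition can hold even without second moments is easy to verify by Monte Carlo. With the illustrative values α1 = 0.5 and β1 = 0.6 (so α1 + β1 = 1.1 > 1) and standard normal shocks, the simulated value of E[log(α1η² + β1)] is negative, so the GARCH(1,1) process is strictly stationary despite having no finite unconditional variance.

```python
import numpy as np

rng = np.random.default_rng(3)
eta2 = rng.standard_normal(1_000_000) ** 2     # squared standard normal shocks

# Illustrative GARCH(1,1) parameters with alpha1 + beta1 = 1.1 > 1
a1, b1 = 0.5, 0.6
log_moment = np.mean(np.log(a1 * eta2 + b1))   # Monte Carlo E[log(a1*eta^2 + b1)]
```

By Jensen's inequality the log-moment is strictly below log(a1 + b1), which is how the condition can be negative even when a1 + b1 exceeds unity.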

3.2. Time-Varying Conditional Correlation GARCH Models

Conditional volatility models are concerned with the second moments of the shocks to returns. Although such shocks are typically assumed to be independent, they are likely to be dependent in practice. Unless ηt is a sequence of i.i.d. random vectors, or alternatively a martingale difference process, the assumption of CCC will not be valid, so that it would be more likely that Γt = {ρijt} for i,j = 1,…,m and t = 1,…,n.

Engle and Kroner (1995) developed the Baba, Engle, Kraft, and Kroner (BEKK) model to capture the time-varying behavior of conditional covariances, as follows:

where the second term in (11) is singular. In addition to including a large number of parameters, which leads to serious computational difficulties, BEKK models the dynamic conditional covariances rather than what is typically of primary interest to practitioners in finance, namely, the dynamic conditional correlations. In the specific model given in (11), BEKK can be interpreted as accommodating serial correlation of unknown form in the standardized residuals. Although it was not considered explicitly in Engle and Kroner (1995), the dynamic conditional correlations associated with BEKK can be derived from Qt = DtΓt Dt as Γt = Dt−1Qt Dt−1, where Dt = (diag Qt)1/2.

To capture the dynamics of time-varying conditional correlation, Γt, Engle (2002) and Tse and Tsui (2002) proposed the closely related dynamic conditional correlation (DCC) model and the variable conditional correlation (VCC) model, respectively, as extensions of the CCC model. No explanation seems to have been given as to how the shocks to returns in either the VCC or DCC model would have to be modified to yield the dynamic structure of the conditional correlations.

The DCC model is given by

where the second term in (12) is singular and θ1 and θ2 are scalar parameters. When θ1 = θ2 = 0, Zt in (12) is equivalent to the CCC model. As Zt in (12) is conditional on the vector of standardized residuals, (12) is the conditional covariance matrix. If ηt were a vector of i.i.d. random variables, with zero mean and unit variance, Zt in (12) would also be the conditional correlation matrix (after appropriate standardization). However, there is no discussion of the properties of ηt in developing the DCC model (although Engle, 2002, p. 342, does state that “the errors are a Martingale difference by construction” in suggesting how to estimate the model). As Zt in (12) does not satisfy the definition of a conditional correlation matrix, Engle (2002) calculates the dynamic conditional correlation matrix through the following standardization:

(Equation (25) in Engle (2002) uses the standardization (diag Zt)−1 rather than (diag Zt)−1/2 in formulating DCC, but this is obviously a typographical error.) The standardization of Zt to obtain Γt* in (13) may also be required because the standardized residuals in the DCC model are unlikely to be independently distributed.
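The standardization in (13) is simply a rescaling by the square roots of the diagonal elements; a small numpy sketch (with an arbitrary symmetric positive definite matrix standing in for Zt) shows that it produces the unit diagonal that a correlation matrix requires.

```python
import numpy as np

def standardize(Z):
    """(diag Z)^(-1/2) Z (diag Z)^(-1/2), as in the DCC standardization (13)."""
    d = 1.0 / np.sqrt(np.diag(Z))
    return Z * np.outer(d, d)

Zt = np.array([[2.0, 0.6],      # illustrative symmetric positive definite Z_t
               [0.6, 0.5]])
Gamma_star = standardize(Zt)
```

Using (diag Zt)^−1 instead of (diag Zt)^−1/2, as in the typographical error noted above, would not leave the diagonal equal to unity.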

The VCC model uses a transformation of the standardized shocks to estimate the time-varying conditional correlations, namely,

where the typical element in the nonsingular second term, which is a lagged recursive conditional correlation matrix, is given by

where M ≥ m. When θ1 = θ2 = 0, Γt in (14) is equivalent to the CCC model. No standardization of (14) is required because it satisfies the definition of a conditional correlation matrix, albeit of standardized shocks, ηit, that are not independently distributed (although they are explicitly—and incorrectly—assumed to be serially independently distributed in Tse and Tsui, 2002, p. 352). The primary structural difference between DCC and VCC is that (13) standardizes Zt to obtain the dynamic conditional correlation matrix, whereas VCC assumes that the time-varying conditional correlation matrix can be calculated recursively using (14).

Chan, Hoti, and McAleer (2003) proposed the generalized autoregressive conditional correlation (GARCC) model, which, unlike the DCC and VCC models, motivates the dynamic structure of the conditional correlations explicitly through serial correlation in the vector of standardized shocks. They showed that, if ηit follows an autoregressive process rather than being a sequence of i.i.d. random vectors, that is,

then a more general dynamic model than DCC and VCC could be obtained when L → ∞, as follows:

where Φ1 and Φ2 are m × m matrices and ○ is the Hadamard (or element-by-element) product. The first equation in (15) was introduced to the univariate ARCH literature (i.e., for m = 1) by Tsay (1987) as a random coefficient autoregressive approach to deriving ARCH models, with a straightforward extension to univariate GARCH models. Chan, Hoti, and McAleer (2003) showed that, when φil = φ1iδil and δil is iid(0, φ2il−1), (16) is the dynamic conditional correlation matrix of the standardized residuals, ηit, which are not independently distributed because of the presence of serial correlation in (15). The GARCC matrix is given by

which makes clear the importance of recognizing the serial correlation in the vector of standardized residuals. Chan, Hoti, and McAleer (2003) show that the standardization in (17) may not be required, in practice, so that (16) is effectively the conditional correlation matrix. However, the standardization in (13) for the DCC model is required as (12) does not satisfy the definition of a conditional correlation matrix of the standardized shocks. As a direct extension of the preceding model, asymmetric effects can be accommodated in GARCC by modifying the conditional covariance matrix in (16) with the indicator function in (4) to produce the asymmetric GARCC (AGARCC) model.

There has been a plethora of interesting practical extensions to the DCC and VCC models, with modifications of one model being directly applicable to the other. Engle (2002) and Hafner and Franses (2003) suggested more general specifications in which the scalar parameters in DCC were replaced by appropriate matrices to relax the restrictions implicit in (12). Cappiello, Engle, and Sheppard (2003) introduced asymmetry into the correlation dynamics of DCC. Billio, Caporin, and Gobbo (2004) and Billio and Caporin (2004) modified the generalized DCC models of Engle (2002) and Hafner and Franses (2003) to develop more flexible DCC models that partition groups (or blocks) of financial commodities into subsets with common dynamic conditional correlations. Kwan, Li, and Ng (2004) extended VCC in a multivariate threshold framework by combining the univariate threshold GARCH model with dynamic conditional correlations. However, to date none of these extensions has examined the asymptotic properties of the respective models.

3.3. Univariate and Multivariate SV Models

In the continuous time SV model, both the asset price and volatility follow diffusion processes, with the logarithm of volatility following an Ornstein–Uhlenbeck process. By using the discrete time approximation to the continuous time SV model and the strong solution of the Ornstein–Uhlenbeck process, the discrete time formulation of the univariate SV model for t = 1,…,n can be represented as an extension of a simple discrete time model in Taylor (1986), namely,

in which the returns to a financial asset in (18) are a nonlinear function of the average volatility level, σ, the shocks to returns, εt, and volatility at time t, ht. The stationary SV equation is given by (20), in which φ is the persistence parameter. Given information at time t, the one-period-ahead forecasts of the mean and variance of ht+1 are φht and ση2, respectively. The negative leverage effect is given as ρ ≤ 0, but it is not necessary to impose the restriction a priori.

Although (18) is equivalent to (19), the transformation to log yt2 in (19) loses information regarding the sign of yt, so that optimal inference based on these two equations would differ in view of (21). However, this information can be recovered using the state space form derived by Harvey and Shephard (1996).

The shocks to returns and volatility may be contemporaneously correlated. The correlation coefficient, ρ, is expected to be negative so as to capture the dynamic leverage between the shocks to returns and shocks to volatility through changes in the debt-equity ratio (for a general discussion of the leverage effect, see Black, 1976, and Christie, 1982, whereas Yu, 2004a, defined the leverage effect in the context of SV models). Although the dynamic leverage effects given in (21) are assumed to be identical for positive and negative shocks, this restriction can be relaxed. Asai and McAleer (2004a) developed the dynamic asymmetric leverage (DAL) SV model to accommodate the differences between positive and negative shocks on dynamic leverage, in addition to incorporating both the sign and magnitude of the previous returns in the SV equation. Yu (2004b) used a special case of the DAL model of Asai and McAleer (2004a) to compare the SV and GARCH models, generalize the news impact function, and reinforce the empirical results by the realized volatility inherent in high frequency intraday data.
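A discrete-time sketch of the univariate SV model with leverage can be simulated directly: returns are σ exp(ht/2)εt, log-volatility follows a stationary AR(1) with persistence φ, and the leverage effect enters through a negative correlation ρ between the return and volatility shocks. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi, rho, sig_eta, sigma = 5000, 0.95, -0.3, 0.2, 0.01   # illustrative values

# Contemporaneously correlated shocks: eps_t (returns) and eta_t (volatility);
# rho < 0 captures the leverage effect
cov = np.array([[1.0,           rho * sig_eta],
                [rho * sig_eta, sig_eta ** 2]])
shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)
eps, eta = shocks[:, 0], shocks[:, 1]

h = np.zeros(n)
for t in range(n - 1):
    h[t + 1] = phi * h[t] + eta[t]     # stationary SV (log-volatility) equation
y = sigma * np.exp(h / 2) * eps        # returns as a nonlinear function of h_t
```

Relaxing the identical treatment of positive and negative shocks, as in the DAL model, would amount to letting the correlation structure depend on the sign and magnitude of eps.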

The discrete time formulation of the multivariate counterpart of the univariate SV model is given for i,j = 1,…,m and t = 1,…,n as

The off-diagonal terms in Φ in (23) represent the volatility spillover effects from other assets. As P in (24) is the matrix of contemporaneous correlations, the contemporaneous covariance matrix in (24), P ○ Ση, captures the multivariate contemporaneous (dynamic) leverage effects. Separate dynamic leverage effects of positive and negative shocks could be incorporated in (24) and (25) through the use of an appropriate indicator function, as in the VARMA-AGARCH model in (4), or using an extension of a univariate continuous function, as in the DAL model of Asai and McAleer (2004a). A multivariate extension of the dynamic leverage SV model of Harvey and Shephard (1996) is compared with a multivariate extension of the asymmetric leverage SV model of Danielsson (1994) in Asai and McAleer (2004c).
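The VAR(1) structure of the multivariate stochastic volatility equation, with off-diagonal elements of Φ capturing volatility spillovers across assets, can be sketched as follows. The function name is illustrative, and the shocks are taken to be mutually independent here, so leverage effects are absent:

```python
import numpy as np

def simulate_msv(n, Phi, Sigma_eta, seed=0):
    """Simulate m latent log-volatility processes with VAR(1) dynamics
    h_{t+1} = Phi h_t + eta_t; off-diagonal elements of Phi give
    volatility spillovers from other assets.  Stationarity requires the
    eigenvalues of Phi to lie inside the unit circle."""
    Phi = np.asarray(Phi)
    m = Phi.shape[0]
    rng = np.random.default_rng(seed)
    eta = rng.multivariate_normal(np.zeros(m), Sigma_eta, size=n)
    h = np.zeros((n, m))
    for t in range(1, n):
        h[t] = Phi @ h[t - 1] + eta[t - 1]
    eps = rng.standard_normal((n, m))    # independent shocks to returns
    y = np.exp(h / 2) * eps
    return y, h
```

With a nonzero (1,2) element of Phi, shocks to the second asset's volatility spill over into the first asset's volatility one period later.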

The assumption of independently distributed shocks to returns in (22) and to volatility in (23) means that the contemporaneous covariance matrix in (24) does not vary over time. This independence assumption could be relaxed such that the shocks to returns in (22) are generated by an autoregressive process, as follows:

which follows a multivariate random coefficient autoregressive approach. The presence of serially dependent processes can affect the outcome and interpretation of the multivariate leverage effect in (24) and (25), such that the dynamic correlation matrix of the vector of shocks to returns would follow a specific functional form. Danielsson (1994) considered the special case m = 1 and L = 1. Asai and McAleer (2004b) examined the more general multivariate case in (26) in which (i) εt and ηt in (24) can have a constant contemporaneous correlation matrix, Ση; or (ii) a dynamic correlation matrix, Σεt, which replaces (24) by specifying Σεt to have a vector autoregressive representation with a Wishart distribution. Meyer and Yu (2004) developed several multivariate SV specifications as extensions of some existing models, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure.

As a further extension of the multivariate SV model given previously, if the vector of shocks to volatility in (23) were also correlated over time, the dynamic leverage given in (24) and (25) could also be made to vary over time. A specific functional form for the serial dependence of the shocks to volatility in (23) would yield a specific functional form for the time-varying dynamic leverage matrix in (24) and (25).

The random errors in (18) are typically assumed to be normally distributed. Equation (19) makes it clear that exact likelihood methods cannot be used directly to estimate the SV parameters, as the latent volatility process must be integrated out of the likelihood, which therefore has no closed form. Recent developments have been based on simulation-based likelihood procedures and on Bayesian Markov chain Monte Carlo (MCMC) techniques.

4. THE POTENTIAL FOR AUTOMATED INFERENCE IN MODELING FINANCIAL VOLATILITY

This section discusses 20 important issues in the specification, estimation, and testing of GARCH and SV models. A PAR index is given, from 1 (low) to 10 (high), and recommendations regarding the possibilities for automated inference in modeling financial volatility are given in each case.

The original development and subsequent extensions of GARCH models have been predominantly in a discrete time framework. Although SV models have typically been derived in a continuous time framework, they have been estimated using both high frequency and ultra-high frequency data. As an exception to the rule, Nelson (1990) investigated ARCH models as an approximation to continuous time SV models and showed that a class of exponential GARCH (EGARCH) models could approximate a range of stochastic differential equations. Drost and Werker (1996) examined continuous time GARCH modeling.

Recommendation 1. Select a volatility model that is appropriate to the data to be analyzed.

It is clear from equation (1) that the GARCH decomposition of returns is additive and independent of volatility, whereas the SV decomposition of returns in (18) (equivalently, (19) while retaining information regarding the sign of yt) is multiplicative and dependent on volatility. The exception to the independent decomposition between returns and volatility is the GARCH-in-mean (GARCH-M) specification, which introduces risk into the conditional mean equation in a manner similar to the SV model. In both types of decompositions, the shocks to returns are, in virtually all cases, assumed to be independently distributed.

Recommendation 2. Check for spillovers between returns and risk.

The conditional mean specification is, in general, arbitrary for GARCH models of the conditional volatility, as in (1), whereas it is derived analytically for the SV model, as in (18). Various modifications to the conditional means in both models are possible (see, e.g., Asai and McAleer, 2003a, who introduced holiday effects into the univariate conditional mean of the SV model).

Recommendation 3. The specification of the conditional mean should be tested for its empirical adequacy; otherwise, the conditional variances and covariances may be biased.

As in the case of the conditional mean specification, the conditional variance, conditional covariance, and conditional correlation specifications are also generally arbitrary for GARCH models, as in (2), (3), (4), (8), (11), (12), (14), and (16), whereas the stochastic variance equation is derived analytically for the SV model, as in (20) and (23). As before, various modifications to the conditional variances in both models are possible (e.g., Asai and McAleer, 2003a, introduced holiday effects into the univariate stochastic variance of the SV model and the conditional variance of the EGARCH model).

Recommendation 4. The specification of volatility should be tested for its empirical adequacy. It will typically be satisfactory to use univariate GARCH(1,1), GJR(1,1), EGARCH(1,1), or SV AR(1) models.

On the basis of the specification of the conditional mean and conditional variance, it is clear that the joint specification of the mean and volatility are arbitrary for GARCH but can be undertaken simultaneously for the SV model. The arbitrariness in the decomposition of the returns, in addition to the specification of the conditional mean, variances, covariances, and correlations in the development of a variety of univariate and multivariate GARCH models, might be referred to as the “Mitsubishi advertisement modeling approach” (MAMA). This is encapsulated in the advertising motto: “Please consider.” Although the arbitrary specifications might be regarded as self-evident, there are few exceptions to the rule that theoretical arguments relating to the conditional mean, variance, covariance, and correlation specifications in modeling GARCH are generally absent. This is in marked contrast to the joint specification of the mean and volatility for SV models.

Recommendation 5. The joint specification of the conditional mean and conditional volatility should be tested for their empirical adequacy; otherwise, the conditional variances and covariances may be biased.

Regularity conditions regarding strict stationarity, ergodicity, and the existence of moments have been derived for a wide range of univariate and multivariate GARCH models. The existence of moments is a necessary and sufficient (sufficient) condition for strict stationarity and ergodicity in univariate (multivariate) GARCH models. Bollerslev (1986) showed that the necessary and sufficient condition for the existence of the second moment of εt, that is, E(εt2) < ∞, for the case r = s = 1 is α1 + β1 < 1 and also analyzed the GARCH(p,q) model. Ling and McAleer (2003) established the moment conditions for univariate and multivariate GARCH models, and Ling and McAleer (2002b) derived the necessary and sufficient conditions for the asymmetric power GARCH model of Ding, Granger, and Engle (1993).
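The GARCH(1,1) second-moment condition, together with the implied unconditional variance ω/(1 − α1 − β1), is straightforward to check numerically; a minimal sketch (the function name is illustrative):

```python
def garch11_unconditional_variance(omega, alpha1, beta1):
    """Unconditional variance of the shocks in a GARCH(1,1) model,
    omega / (1 - alpha1 - beta1), which is finite if and only if
    Bollerslev's (1986) condition alpha1 + beta1 < 1 holds."""
    persistence = alpha1 + beta1
    if persistence >= 1:
        raise ValueError("second-moment condition alpha1 + beta1 < 1 fails")
    return omega / (1 - persistence)
```

For example, (omega, alpha1, beta1) = (0.1, 0.05, 0.90) gives an unconditional variance of 0.1/0.05 = 2.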

In comparison with the various GARCH models that are available, SV models are based on stationary processes.

Recommendation 6. The appropriate regularity conditions should be evaluated as diagnostic checks of the structural properties of the underlying models. In particular, the log-moment condition is weaker, and hence more useful, than the moment conditions.

The log-moment condition, as given in (9) and (10) for the GARCH(1,1) and GJR(1,1) models, respectively, is a weak regularity condition that does not require the existence of moments. This result makes it clear that the unconditional variance of the shocks to returns can be infinite for GARCH models, and hence fat-tailed distributions are likely to be more appropriate than their normal counterpart. Moreover, there is no statistical reason for using integrated GARCH models. In comparison, SV models are based on stationary processes so that the unconditional variance of shocks to returns is finite.
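Because the GARCH(1,1) log-moment condition, E[log(α1ηt2 + β1)] < 0 with ηt the standardized shock, involves an expectation with no simple closed form, it can be checked by Monte Carlo simulation. A sketch, assuming standard normal standardized shocks (the function name is illustrative):

```python
import numpy as np

def log_moment_garch11(alpha1, beta1, n_draws=200_000, seed=0):
    """Monte Carlo estimate of the GARCH(1,1) log-moment
    E[log(alpha1 * eta**2 + beta1)] for standard normal eta.

    A negative value satisfies the weak regularity condition; by Jensen's
    inequality it can hold even when alpha1 + beta1 >= 1, in which case
    the unconditional variance is infinite."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n_draws)
    return np.log(alpha1 * eta ** 2 + beta1).mean()
```

Note that the integrated case alpha1 + beta1 = 1 still yields a strictly negative log-moment under normality, which is one way to see why imposing an integrated GARCH process is statistically unnecessary.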

Recommendation 7. As the unconditional variance need not be finite for the log-moment regularity condition to be satisfied, it is unnecessary to estimate an integrated process for volatility models.

The two most popular asymmetric GARCH models are the threshold GJR model of Glosten et al. (1992) and the EGARCH model of Nelson (1991). The structural properties of the GJR(1,1) model were established by Ling and McAleer (2002a), who showed that the necessary and sufficient condition for E(εt2) < ∞ is α1 + γ1/2 + β1 < 1. For the case p = q = 1 in (8), Nelson (1991) showed that |β1| < 1 ensures stationarity and ergodicity for EGARCH(1,1). Caporin and McAleer (2004a) incorporated dynamic asymmetric effects into the GARCH model to distinguish between small and large, and positive and negative, shocks.
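The Ling and McAleer (2002a) second-moment condition for the GJR(1,1) model can be coded as a one-line diagnostic check (assuming a symmetric distribution for the standardized shocks; the function name is illustrative):

```python
def gjr11_second_moment_ok(alpha1, gamma1, beta1):
    """Ling and McAleer (2002a): with symmetric standardized shocks,
    E(eps_t**2) < inf for GJR(1,1) if and only if
    alpha1 + gamma1/2 + beta1 < 1."""
    return alpha1 + gamma1 / 2 + beta1 < 1
```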

As compared with the GARCH literature, the interpretation of asymmetric and leverage effects in SV models is not entirely clear. Jacquier, Polson, and Rossi (2004) undertook a Bayesian analysis of a discrete time univariate SV model with fat tails and correlated errors. Yu (2004a) showed that it was not clear how to ensure and interpret the leverage effect in their model and advocated the Euler approximation to the continuous time SV model with the leverage effect. So, Li, and Lam (2002) proposed a threshold SV model in which the constant and the autoregressive parameter in the SV equation change according to the sign of the previous return. Asai and McAleer (2003b) examined two methods for modeling asymmetries in SV models, namely, the indicator function threshold effects approach of Glosten et al. (1992), as suggested by Harvey and Shephard (1996), and the dynamic leverage effects given in equations (21) and (25). Asai and McAleer (2004a) modified the dynamic leverage parameter in (21) to develop a dynamic asymmetric leverage SV model with differential impacts from positive and negative shocks, with leverage being greater for negative shocks.

Recommendation 8. Asymmetric and dynamic leverage effects should be estimated to provide diagnostic checks of the underlying univariate models; otherwise, the conditional variances and covariances may be biased.

The GARCH and SV models available in the literature do not have analytical expressions for their associated estimators, and even the likelihood function for SV models does not have an analytical expression. This computational difficulty can lead to significant problems in developing an automated approach to model financial volatility.

Weiss (1986) and Pantula (1989) analyzed the statistical properties of the ARCH(p) model and established the consistency and asymptotic normality of the QMLE under the existence of fourth moments of the unconditional shocks. These results were followed by a host of other developments, specifically related to the sufficient conditions for the consistency and asymptotic normality of the QMLE for GARCH(p,q) models. The fourth (sixth) moment is sufficient for the local (global) QMLE to be asymptotically normal (Ling and McAleer, 2003). The log-moment condition in (9) and its generalization to GARCH(p,q) are sufficient for consistency of the QMLE of univariate and multivariate GARCH models (Jeantheau, 1998) and sufficient for asymptotic normality of the QMLE of univariate GARCH models (Boussama, 2000). The sufficient conditions for consistency and asymptotic normality of the GJR(1,1) model were established by McAleer, Chan, and Marinova (2002), whereas those for GJR(p,q) may be obtained as a special case of the results for ARMA-AGARCH in Chan et al. (2002). By comparison, the statistical properties of the univariate and multivariate EGARCH models (Nelson, 1991) have not yet been formally developed. However, as the innovations in EGARCH are assumed to be i.i.d., the statistical properties of univariate EGARCH are likely to be natural extensions of univariate ARMA processes. For the case p = q = 1 in (8), Shephard (1996) observed that |β1| < 1 may be a sufficient condition for consistency of the QMLE for EGARCH(1,1), and McAleer et al. (2002) suggested that |β1| < 1 may also be a sufficient condition for the existence of moments and for asymptotic normality of the QMLE. These regularity conditions may be used as diagnostic checks of the underlying models. Discrimination criteria such as the Akaike information criterion (AIC) and Bayesian information criterion (BIC) are also useful in the optimal selection of GARCH models.
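As there is no closed-form expression for the QMLE, estimation proceeds by numerical maximization of the quasi-likelihood. The following is a minimal sketch of Gaussian QMLE for the GARCH(1,1) model; the function names are illustrative, and initializing the conditional variance at the sample variance is one common, but not unique, choice:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, y):
    """Gaussian quasi-log-likelihood of a GARCH(1,1) model, sign-flipped
    for minimization (the constant term is omitted)."""
    omega, alpha1, beta1 = params
    n = len(y)
    h = np.empty(n)
    h[0] = y.var()                        # initialize at the sample variance
    for t in range(1, n):
        h[t] = omega + alpha1 * y[t - 1] ** 2 + beta1 * h[t - 1]
    return 0.5 * np.sum(np.log(h) + y ** 2 / h)

def fit_garch11(y, start=(0.05, 0.05, 0.90)):
    """QMLE of (omega, alpha1, beta1) under positivity constraints."""
    bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(garch11_negloglik, start, args=(y,),
                   method="L-BFGS-B", bounds=bounds)
    return res.x
```

The fitted parameters can then be checked against the log-moment and second-moment regularity conditions discussed above, and competing specifications compared via AIC or BIC computed from the maximized quasi-likelihood.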

The standard methods of estimating univariate SV models are numerical or Monte Carlo likelihood (MCL) and various Bayesian MCMC techniques. Recent developments in numerical likelihood methods are concerned with sampling theory based on yt in (18) and based on log yt2 in (19). The optimal numerical likelihood method based on yt, namely, the evaluation of the likelihood through recursive numerical integration, has been considered by Fridman and Harris (1998) and Watanabe (1999), and the optimal Monte Carlo maximum likelihood method based on log yt2 has been proposed by Sandmann and Koopman (1998). The approaches based on log yt2 would need to be modified to accommodate asymmetric effects. Jacquier et al. (1994) proposed a Bayesian MCMC technique based on a single-move sampler that requires sampling each ht in (18). Two methods are more efficient than the single-move sampler, namely, the multimove sampler of Shephard and Pitt (1997; correction, 2004) (see also Watanabe and Omori, 2004) and the integration sampler of Kim, Shephard, and Chib (1998) and Chib, Nardari, and Shephard (2002). Bayesian MCMC methods have also been proposed in the literature for estimating SV models with leverage: Yu (2004b) used the single-move sampler, whereas Omori, Chib, Shephard, and Nakajima (2004) used the mixture sampler that was developed by Kim et al. (1998). The numerical likelihood techniques yield estimators that are consistent and asymptotically normal, whereas the Bayesian MCMC inferences are exact. In analyzing the specification of SV models, it would also be useful to consider the marginal likelihood, harmonic mean estimation, and the deviance information criterion (DIC) of Berg, Meyer, and Yu (2004).
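As one concrete example of a likelihood-based procedure, the linearized state space form based on log yt2 can be estimated by quasi-maximum likelihood with the Kalman filter, in the spirit of Harvey and Shephard (1996). This sketch (function names illustrative) treats the non-Gaussian measurement error, the centered log of a squared standard normal, as if it were normal with variance π²/2, so the resulting estimator is a QMLE rather than the exact MLE:

```python
import numpy as np
from scipy.optimize import minimize

LOG_CHI2_VAR = np.pi ** 2 / 2    # variance of log(eps**2) for standard normal eps

def sv_qml_negloglik(params, x):
    """Sign-flipped Gaussian quasi-log-likelihood of the linearized SV
    model x_t = log y_t**2 = c + h_t + u_t, h_{t+1} = phi*h_t + eta_t,
    via the Kalman filter in prediction form; c absorbs log sigma**2
    and the mean of log(eps**2)."""
    c, phi, sigma_eta = params
    q = sigma_eta ** 2
    a, P = 0.0, q / (1 - phi ** 2)        # stationary initialization
    nll = 0.0
    for xt in x:
        v = xt - c - a                    # one-step prediction error
        F = P + LOG_CHI2_VAR              # its variance
        nll += 0.5 * (np.log(2 * np.pi) + np.log(F) + v ** 2 / F)
        K = phi * P / F                   # Kalman gain
        a = phi * a + K * v
        P = phi ** 2 * P + q - K ** 2 * F
    return nll

def fit_sv_qml(y, start=(-1.27, 0.9, 0.3)):
    """QML estimates of (c, phi, sigma_eta) from returns y."""
    x = np.log(y ** 2 + 1e-12)            # guard against zero returns
    bounds = [(None, None), (-0.999, 0.999), (1e-4, None)]
    res = minimize(sv_qml_negloglik, start, args=(x,),
                   method="L-BFGS-B", bounds=bounds)
    return res.x
```

As noted in the text, the transformation to log yt2 loses the sign of yt, so this sketch cannot accommodate leverage; the state space form of Harvey and Shephard (1996) would be needed to recover that information.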

Recommendation 9. The appropriate regularity conditions and model selection criteria should be evaluated as diagnostic checks of the properties of the underlying univariate models.

In empirical research, it has generally been found that the persistence of shocks to volatility is captured reasonably well by simple GARCH and SV models. The persistence of shocks needs to be modeled accurately as this can affect the structural, asymptotic, and empirical properties of estimated volatility models. For example, although the long-run persistence need not be less than unity for the QMLE of the GARCH(1,1) model to be consistent and asymptotically normal, an excessively high long-run persistence can lead to the log-moment condition being violated. Taylor (1994) and Shephard (1996), among others, have found that the persistence in volatility implied by GARCH(1,1) models is, in general, higher than the persistence implied by AR(1) SV models.

Recommendation 10. The persistence of shocks needs to be modeled accurately.

Empirical research has also found that the kurtosis in the distributions of shocks is captured reasonably well by simple GARCH and SV models. Although normality is not necessary for the consistency or asymptotic normality of the QMLE for GARCH models, the QMLE can be inefficient when the standardized shocks are not normal. Moreover, whether the distribution of the shocks is normal or fat-tailed can affect inferences arising from empirical volatility models. Shephard (1996), among others, has found that the assumption of normality is adequate for the AR(1) SV model, whereas a fat-tailed distribution, such as Student t, is required to make the GARCH(1,1) model comparable with its SV counterpart. Hall and Yao (2003) showed that, for GARCH models with heavy-tailed errors, the QMLE suffers from complex limit distributions and slow convergence rates.

Carnero, Pena, and Ruiz (2004) found that the AR(1) SV model is more flexible than its GARCH(1,1) counterpart in representing simultaneously the persistence of shocks and excess kurtosis implied by these models.
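The trade-off noted by Carnero, Pena, and Ruiz (2004) can be illustrated with the unconditional kurtosis implied by a GARCH(1,1) model with standard normal innovations, 3(1 − (α1 + β1)2)/(1 − (α1 + β1)2 − 2α12), which is finite only under the fourth-moment condition (α1 + β1)2 + 2α12 < 1. A minimal sketch (function name illustrative):

```python
def garch11_kurtosis(alpha1, beta1):
    """Unconditional kurtosis of GARCH(1,1) shocks with standard normal
    innovations.  Requires the fourth-moment condition
    (alpha1 + beta1)**2 + 2*alpha1**2 < 1."""
    p2 = (alpha1 + beta1) ** 2
    denom = 1 - p2 - 2 * alpha1 ** 2
    if denom <= 0:
        raise ValueError("fourth-moment condition fails")
    return 3 * (1 - p2) / denom
```

With alpha1 = beta1 = 0 the model collapses to i.i.d. normal shocks and the kurtosis is exactly 3; obtaining both high persistence and large excess kurtosis forces the parameters toward the boundary of the fourth-moment region, which is the sense in which the AR(1) SV model is more flexible.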

Recommendation 11. The kurtosis in the distributions of shocks needs to be modeled accurately.

Outliers have a long history in statistics but far less so in econometrics. Chow (1960) introduced two test statistics to econometrics, namely, a test for structural change and a test of predictive failure, otherwise known as Chow's first and second tests, respectively. The test of predictive failure is a test of forecast accuracy under the assumption of correct model specification. If the model is correct, it should forecast accurately; if it does not do so, the model is incorrect as long as the data are accurately observed and do not contain outliers. Chow's second test is also a test of outliers if the model is correct; if the model does not forecast adequately, it may be due to outliers in the data (for further details, see McAleer and Tse, 1988). In financial econometrics, a single outlier can also affect the structural and statistical properties of the QMLE of GARCH models.

If a single outlier is not deleted from the sample, it may hide volatility that does, in fact, exist, thereby leading to the problem of what might be called hidden volatility.

Recommendation 12. It is essential to check for the presence of a single outlier and hidden volatility.

The term “spurious” suggests a lack of genuine content. For example, a regression involving nonstationary processes may be deemed spurious because its statistical significance is based on an inappropriate asymptotic distribution. In financial econometrics, sequential outliers can also affect the structural and statistical properties of the QMLE of GARCH models. Volatility may be spurious if it is found to be statistically significant even though it does not exist. An empirical finding of spurious volatility is less likely to depend on the use of an incorrect asymptotic distribution than on biases arising from the presence of sequential outliers. Sakata and White (1998) analyzed the impact of outliers on volatility modeling through the use of high breakdown point conditional dispersion estimation.

If sequential outliers are not deleted, they may mimic volatility that does not exist, thereby leading to the problem of what might be called spurious volatility. There has been very little discussion in the volatility literature to date on this potentially serious problem.

Recommendation 13. It is essential to check for the presence of sequential outliers and spurious volatility.

A univariate moving average process in the unconditional shocks is a special case of the VARMA-GARCH model. However, serial correlation in the standardized residuals, which will affect the interpretation of the conditional variance, has been all but ignored in the GARCH literature. Two exceptions are the random coefficient autoregressive approach of Tsay (1987) and a special case of GARCC, in which serial correlation in the standardized residuals of a univariate GARCH model can be obtained by setting m = 1 in equation (15).

Danielsson (1994) examined an SV model with an AR(1) process in the unconditional shocks to returns in (18) (i.e., with m = 1 and L = 1 in (26)) and considered a simulation-based method for computing the maximum likelihood estimates. (The AR(1) SV process in equation (1) in Danielsson, 1994, uses a different parameterization from that in equation (20) in this paper.)

Recommendation 14. It is important to test for the presence of serial correlation in the univariate shocks to returns.

As discussed in Section 3, this area has been the subject of substantial recent research, especially since 2002. Several of these models are already available in standard econometric software packages such as RATS and EViews. Multivariate models include the diagonal model of Bollerslev, Engle, and Wooldridge (1988), the vech (or VAR) model of Engle and Kroner (1995), CCC, BEKK, VARMA-GARCH, VARMA-AGARCH, DCC, VCC, GARCC, asymmetric DCC, various generalized DCC models, flexible DCC, and asymmetric GARCC. Chan, Hoti, and McAleer (2003) established the structural properties of GARCC, which include the necessary and sufficient conditions for stationarity and ergodicity, and sufficient conditions for the existence of moments.

In comparison with the expansive published and unpublished research in multivariate GARCH models, to date there have been few published papers in the analysis of multivariate SV models. Harvey, Ruiz, and Shephard (1994) developed the first multivariate SV model as an extension of the univariate SV model, but they did not include any leverage effects, interdependence (or spillovers) across stochastic volatilities, or dynamic correlations of shocks within and between the returns and stochastic volatility equations. Danielsson (1998), Pitt and Shephard (1999), and Liesenfeld and Richard (2003) are among the few published papers in which multivariate SV models have been estimated. Alternative multivariate SV models have been analyzed and developed by Jacquier, Polson, and Rossi (1995, 1999), Chib, Nardari, and Shephard (2001), Chan, Kohn, and Kirby (2003), Asai and McAleer (2004b, 2004c), and Meyer and Yu (2004). Gourieroux, Jasiak, and Sufana (2004) proposed the Wishart autoregressive (WAR) multivariate process of stochastic positive semidefinite matrices to develop an altogether different type of dynamic SV model.

Recommendation 15. The specification of the multivariate conditional mean should be tested for its empirical adequacy; otherwise, the conditional variances and covariances may be biased. However, the conditional correlations need not be biased.

The QMLE method has typically been used to maximize the conditional likelihood functions of various multivariate GARCH models. Ling and McAleer (2003), Chan et al. (2002), and Chan, Hoti, and McAleer (2003) established the structural properties, including the existence of moment conditions, of the VARMA-GARCH, VARMA-AGARCH, and GARCC models, respectively. Comte and Lieberman (2003) assumed but did not derive the regularity conditions for the BEKK model and did not consider the presence of serial correlation in the standardized residuals as an underlying cause of the dynamic conditional covariance matrix.

To state the obvious, the problems inherent in estimating univariate SV models are exacerbated for their multivariate counterparts. The most common technique for estimating multivariate SV models is the Bayesian MCMC method, with the high dimensionality of the models being a serious practical problem. Liesenfeld and Richard (2003) applied the maximum likelihood approach based upon an efficient importance sampling (ML-EIS) procedure to a multivariate factor model with SV and also used the EIS approach for filtering to yield several diagnostic tests.

Recommendation 16. Multivariate conditional volatility models are typically estimated using QMLE, whereas the most common technique for estimating multivariate SV models is the Bayesian MCMC method.

Although there are many multivariate GARCH models, there are few papers that have examined the structural properties of the models and the statistical properties of the estimators. Ling and McAleer (2003) showed that the multivariate second moment was sufficient for consistency of the QMLE for the VARMA-GARCH model. Chan et al. (2002) and Chan, Hoti, and McAleer (2003) established the consistency of the QMLE for the VARMA-AGARCH and GARCC models, respectively, under appropriate multivariate log-moment conditions. Each of these authors established asymptotic normality of the local (global) QMLE of their respective models under verifiable fourth (sixth) moment conditions. Engle and Sheppard (2001) assumed that the (unstated and hence unverifiable) regularity conditions were satisfied and discussed how the statistical properties of DCC could be established. Comte and Lieberman (2003) established consistency of the QMLE for the BEKK model using the conditions regarding multivariate log-moments in Jeantheau (1998) and established asymptotic normality by assuming the existence of eighth moments.

The statistical properties of the various estimators of multivariate SV models follow from their univariate counterparts. Multivariate Bayesian MCMC techniques yield exact inferences, whereas alternative numerical likelihood estimators yield estimators that are consistent and asymptotically normal.

Recommendation 17. The appropriate regularity conditions should be evaluated as diagnostic checks of the statistical properties of the underlying multivariate models.

As discussed in Section 3, multivariate asymmetric effects have been examined explicitly in the VARMA-AGARCH, AGARCC, asymmetric DCC (ADCC), and dynamic asymmetric multivariate GARCH (DAMGARCH) (Caporin and McAleer, 2004b) models. Asymmetric effects have been considered implicitly in estimating multivariate EGARCH, but the statistical properties of multivariate EGARCH have not been developed formally. As the innovations in EGARCH are assumed to be i.i.d., the statistical properties of multivariate EGARCH would be expected to follow as natural extensions of ARMA processes (for further details, see Shephard, 1996; McAleer et al., 2002).

An extension of the univariate dynamic leverage effect in SV models in (21) can be accommodated through the specifications in (24) and (25). Jacquier et al. (1995) and Chan, Kohn, and Kirby (2003) extended the multivariate SV model to incorporate various types of leverage effects, as in equation (24), although these papers do not guarantee the existence of leverage effects (for further details, see Yu, 2004a, in the context of Jacquier, Polson, and Rossi, 2004; Meyer and Yu, 2004). Danielsson (1998) suggested a multivariate extension of the asymmetric SV model analyzed in Harvey and Shephard (1996). Asai and McAleer (2004b) proposed two types of correlation structures for multivariate SV models, namely, the constant correlation (CC) and dynamic correlation (DC) SV models, as multivariate extensions of the univariate asymmetric SV models in Danielsson (1994) and Asai and McAleer (2004a). Meyer and Yu (2004) also developed a multivariate SV model with time-varying correlations. Multivariate extensions of the dynamic leverage and asymmetric leverage models are compared in Asai and McAleer (2004c).

Recommendation 18. Asymmetric and dynamic leverage effects should be estimated to provide diagnostic checks of the underlying multivariate models; otherwise, the conditional variances and covariances may be biased. However, the conditional correlations need not be biased.

A multivariate moving average process in the unconditional shocks is a special case of the VARMA-GARCH model. The GARCC model is the only volatility specification that explicitly considers serial correlation in the vector of standardized residuals. Serially dependent processes are not discussed in the development of either the DCC or VCC models, or of any extensions to these models, in spite of the fact that serial correlation in the standardized residuals, or alternatively martingale difference processes, provides a clear motivation for dynamic conditional correlations to exist. The BEKK model implicitly accommodates serially dependent processes, but of unknown form, in (11).

Asai and McAleer (2004b) analyzed dynamic correlations in multivariate SV models using an approach that accommodates serial correlation. Their model can be extended, under appropriate parametric restrictions and upon relaxing the independence assumption of the shocks to the returns in (22) and the shocks in (23), to accommodate serially dependent processes explicitly, as follows: an ARMA process in the vector of shocks to returns in (22), an MA process in the vector of shocks to stochastic volatility in (23), and an ARMA process in the correlations of the two vectors of shocks in (24) and (25).

Recommendation 19. It is important to test for the presence of serial correlation in the multivariate shocks to returns.

GARCH models were not developed specifically for continuous time data, so it is perhaps not surprising that such models do not seem to have been widely used for modeling prices, volumes, and spreads with ultra-high frequency data. Although the autoregressive conditional duration (ACD) model of Engle and Russell (1998) measures duration, it does not (yet) seem to have been extended to the empirical analysis of prices, volumes, or spreads.

In recent years, realized volatility has become directly related to the volatility literature (see the influential papers by Andersen, Bollerslev, Diebold, and Ebens, 2001; Andersen, Bollerslev, Diebold, and Labys, 2001; and Barndorff-Nielsen and Shephard, 2002a, 2002b; and the survey by Andersen, Bollerslev, and Diebold, 2002). Realized volatility has changed the way in which volatility is normally modeled because ultra-high frequency volatility can now be treated as being directly observed in the market; this is an area where the automated inference approach might become very useful. In addition to SV models, which would seem to be ideally suited to modeling ultra-high frequency data, a variety of nonparametric models may also be used to model such data.
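Realized variance itself is simple to compute, being the sum of squared intraday returns over the trading day; a minimal sketch (function name illustrative):

```python
import numpy as np

def realized_variance(intraday_returns):
    """Realized variance: the sum of squared intraday returns, a
    model-free estimator of the day's integrated variance when prices
    are observed at high frequency."""
    r = np.asarray(intraday_returns, dtype=float)
    return float(np.sum(r ** 2))
```

The daily series of realized variances produced this way can then be treated as an (approximately) observed volatility series and modeled directly.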

The PAR index may not be appropriate for analyzing all ultra-high frequency data, as it will depend on the purpose of the empirical analysis. If the high frequency data are used to model and calculate volatility, then the automated approach is likely to be useful. However, the availability of ultra-high frequency data may make it possible to use an accurate approximation of the unobserved volatility. It may then be feasible to treat high frequency volatility as being directly observable, so that the PAR index would be lower.

Recommendation 20. New GARCH, SV, nonparametric, and realized volatility models of ultra-high frequency prices, volumes, and spreads should be developed, estimated, and tested against each other.

5. CONCLUDING REMARKS

This paper used the specific-to-general methodological approach that is widely used in science, in which problems with existing theories are resolved as the need arises, to illustrate a number of important developments in the modeling of univariate and multivariate financial volatility. Some of the difficulties in analyzing time-varying univariate and multivariate GARCH and SV models included the number of parameters to be estimated and the computational complexities associated with some multivariate GARCH models and both univariate and multivariate SV models. For these reasons, among others, automated inference in its present state was argued to require various modifications and extensions for modeling in empirical financial econometrics.

As a contribution to the development of automated inference in modeling financial volatility, 20 important issues in the specification, estimation, and testing of GARCH and SV models were discussed. A potential for automation rating (PAR) index and recommendations regarding the possibilities for automated inference in modeling financial volatility were given in each case.

REFERENCES

Andersen, T.G., T. Bollerslev, & F.X. Diebold (2002) Parametric and nonparametric volatility measurement. In Y. Aït-Sahalia & L.P. Hansen (eds.), Handbook of Financial Econometrics. North-Holland (forthcoming).
Andersen, T.G., T. Bollerslev, F.X. Diebold, & H. Ebens (2001) The distribution of realized stock return volatility. Journal of Financial Economics 61, 43–76.
Andersen, T.G., T. Bollerslev, F.X. Diebold, & P. Labys (2001) The distribution of realized exchange rate volatility. Journal of the American Statistical Association 96, 42–55.
Asai, M. & M. McAleer (2003a) Trading Day Effects in Stochastic Volatility and Exponential GARCH Models. Manuscript, Faculty of Economics, Tokyo Metropolitan University.
Asai, M. & M. McAleer (2003b) Dynamic Leverage and Threshold Effects in Stochastic Volatility Models. Manuscript, Faculty of Economics, Tokyo Metropolitan University.
Asai, M. & M. McAleer (2004a) Dynamic Asymmetric Leverage in Stochastic Volatility Models. Manuscript, Faculty of Economics, Tokyo Metropolitan University.
Asai, M. & M. McAleer (2004b) Dynamic Correlations in Stochastic Volatility Models. Manuscript, Faculty of Economics, Tokyo Metropolitan University.
Asai, M. & M. McAleer (2004c) Multivariate Asymmetric Stochastic Volatility Models. Manuscript, Faculty of Economics, Tokyo Metropolitan University.
Barndorff-Nielsen, O.E. & N. Shephard (2002a) Econometric analysis of realised volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society, Series B 64, 253–280.
Barndorff-Nielsen, O.E. & N. Shephard (2002b) Estimating quadratic variation using realized variance. Journal of Applied Econometrics 17, 457–478.
Berg, A., R. Meyer, & J. Yu (2004) Deviance information criterion for comparing stochastic volatility models. Journal of Business & Economic Statistics 22, 107–120.
Billio, M. & M. Caporin (2004) A Generalised Dynamic Conditional Correlation Model for Portfolio Risk Evaluation. Manuscript, Department of Economics, University of Venice “Ca' Foscari.”
Billio, M., M. Caporin, & M. Gobbo (2004) Flexible Dynamic Conditional Correlation Multivariate GARCH for Asset Allocation. Manuscript, Department of Economics, University of Venice “Ca' Foscari.”
Black, F. (1976) Studies of stock market volatility changes. In 1976 Proceedings of the American Statistical Association, Business and Economic Statistics Section, pp. 177–181.
Bollerslev, T. (1986) Generalised autoregressive conditional heteroscedasticity. Journal of Econometrics 31, 307–327.
Bollerslev, T. (1990) Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH approach. Review of Economics and Statistics 72, 498–505.
Bollerslev, T., R.F. Engle, & J.M. Wooldridge (1988) A capital asset pricing model with time-varying covariances. Journal of Political Economy 96, 116–131.
Boussama, F. (2000) Asymptotic normality for the quasi-maximum likelihood estimator of a GARCH model. Comptes Rendus de l'Academie des Sciences, Serie I 331, 81–84 (in French).
Caporin, M. & M. McAleer (2004a) Dynamic Asymmetric GARCH. Manuscript, Department of Economics, University of Venice “Ca' Foscari.”
Caporin, M. & M. McAleer (2004b) DAMGARCH: A Dynamic Asymmetric Multivariate GARCH Model. Manuscript, Department of Economics, University of Venice “Ca' Foscari.”
Cappiello, L., R.F. Engle, & K. Sheppard (2003) Asymmetric Dynamics in the Correlations of Global Equity and Bond Returns. European Central Bank Working paper 204.
Carnero, M.A., D. Peña, & E. Ruiz (2004) Persistence and kurtosis in GARCH and stochastic volatility models. Journal of Financial Econometrics 2, 319–342.
Chan, D., R. Kohn, & C. Kirby (2003) Multivariate Stochastic Volatility with Leverage. Manuscript, School of Economics, University of New South Wales.
Chan, F., S. Hoti, & M. McAleer (2002) Structure and Asymptotic Theory for Multivariate Asymmetric Volatility: Empirical Evidence for Country Risk Ratings. Paper presented to the 2002 Australasian Meeting of the Econometric Society, Brisbane, Australia, July 2002.
Chan, F., S. Hoti, & M. McAleer (2003) Generalized Autoregressive Conditional Correlation. Manuscript, School of Economics and Commerce, University of Western Australia.
Chib, S., F. Nardari, & N. Shephard (2001) Analysis of High Dimensional Multivariate Stochastic Volatility Models. Manuscript, School of Business, Washington University, St. Louis.
Chib, S., F. Nardari, & N. Shephard (2002) Markov chain Monte Carlo methods for stochastic volatility models. Journal of Econometrics 108, 281–316.
Chow, G.C. (1960) Tests of equality between sets of coefficients in two linear regressions. Econometrica 28, 591–605.
Christie, A.A. (1982) The stochastic behavior of common stock variances: Value, leverage and interest rate effects. Journal of Financial Economics 10, 407–432.
Comte, F. & O. Lieberman (2003) Asymptotic theory for multivariate GARCH processes. Journal of Multivariate Analysis 84, 61–84.
Danielsson, J. (1994) Stochastic volatility in asset prices: Estimation with simulated maximum likelihood. Journal of Econometrics 64, 375–400.
Danielsson, J. (1998) Multivariate stochastic volatility models: Estimation and a comparison with VGARCH models. Journal of Empirical Finance 5, 155–173.
Dharmapala, D. & M. McAleer (1996) Econometric methodology and the philosophy of science. Journal of Statistical Planning and Inference 49, 9–37.
Ding, Z., C.W.J. Granger, & R.F. Engle (1993) A long memory property of stock market returns and a new model. Journal of Empirical Finance 1, 83–106.
Drost, F. & B. Werker (1996) Closing the GARCH gap: Continuous-time GARCH modeling. Journal of Econometrics 74, 31–57.
Engle, R.F. (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1007.
Engle, R.F. (2002) Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics 20, 339–350.
Engle, R.F. & K.F. Kroner (1995) Multivariate simultaneous generalized ARCH. Econometric Theory 11, 122–150.
Engle, R.F. & J.R. Russell (1998) Autoregressive conditional duration: A new model for irregularly spaced transactions data. Econometrica 66, 1127–1162.
Engle, R.F. & K. Sheppard (2001) Theoretical and Empirical Properties of Dynamic Conditional Correlation Multivariate GARCH. Working paper 8554, National Bureau of Economic Research.
Fridman, M. & L. Harris (1998) A maximum likelihood approach for non-Gaussian stochastic volatility models. Journal of Business & Economic Statistics 16, 284–291.
Glosten, L., R. Jagannathan, & D. Runkle (1992) On the relation between the expected value and volatility of nominal excess return on stock. Journal of Finance 46, 1779–1801.
Gourieroux, C., J. Jasiak, & R. Sufana (2004) The Wishart Autoregressive Process of Multivariate Stochastic Volatility. Manuscript, CRES and CEPREMAP.
Granger, C.W.J. & D.F. Hendry (2005) A dialogue concerning a new instrument for econometric modeling. Econometric Theory (this issue).
Hafner, C. & P.H. Franses (2003) A Generalized Dynamic Conditional Correlation Model for Many Asset Returns. Econometric Institute Report EI 2003-18, Erasmus University, Rotterdam.
Hall, P. & Q. Yao (2003) Inference in ARCH and GARCH models with heavy-tailed errors. Econometrica 71, 285–317.
Hansen, P.R., A. Lunde, & J.M. Nason (2003) Choosing the best volatility models: The model confidence set approach. Oxford Bulletin of Economics and Statistics 65, 839–861.
Harvey, A.C., E. Ruiz, & N. Shephard (1994) Multivariate conditional variance models. Review of Economic Studies 61, 247–264.
Harvey, A.C. & N. Shephard (1996) Estimation of an asymmetric stochastic volatility model for asset returns. Journal of Business & Economic Statistics 14, 429–434.
Jacquier, E., N.G. Polson, & P.E. Rossi (1994) Bayesian analysis of stochastic volatility models. Journal of Business & Economic Statistics 12, 371–389.
Jacquier, E., N.G. Polson, & P. Rossi (1995) Models and Priors for Multivariate Stochastic Volatility. CIRANO Working paper 95s-18, Montreal.
Jacquier, E., N.G. Polson, & P. Rossi (1999) Stochastic Volatility: Univariate and Multivariate Extensions. Manuscript, Department of Finance, Boston College.
Jacquier, E., N.G. Polson, & P.E. Rossi (2004) Bayesian analysis of stochastic volatility models with fat-tails and correlated errors. Journal of Econometrics (forthcoming).
Jeantheau, T. (1998) Strong consistency of estimators for multivariate ARCH models. Econometric Theory 14, 70–86.
Keuzenkamp, H. & M. McAleer (1995) Simplicity, scientific inference and econometric modelling. Economic Journal 105, 1–21.
Kim, S., N. Shephard, & S. Chib (1998) Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393.
Kwan, C.K., W.K. Li, & K. Ng (2004) A Multivariate Threshold GARCH Model with Time-Varying Correlations. Manuscript, Department of Statistics and Actuarial Science, University of Hong Kong.
Liesenfeld, R. & J.-F. Richard (2003) Univariate and multivariate stochastic volatility models: Estimation and diagnostics. Journal of Empirical Finance 10, 505–531.
Ling, S. & M. McAleer (2002a) Stationarity and the existence of moments of a family of GARCH processes. Journal of Econometrics 106, 109–117.
Ling, S. & M. McAleer (2002b) Necessary and sufficient moment conditions for the GARCH(r,s) and asymmetric power GARCH(r,s) models. Econometric Theory 18, 722–729.
Ling, S. & M. McAleer (2003) Asymptotic theory for a vector ARMA-GARCH model. Econometric Theory 19, 278–308.
McAleer, M. (1994) Sherlock Holmes and the search for truth: A diagnostic tale. Journal of Economic Surveys 8, 317–370. Reprinted in L. Oxley, D.A.R. George, C.J. Roberts, & S. Sayer (eds.), Surveys in Econometrics (Basil Blackwell, 1994), pp. 299–349.
McAleer, M., F. Chan, & D. Marinova (2002) An Econometric Analysis of Asymmetric Volatility: Theory and Application to Patents. Paper presented to the Australasian meeting of the Econometric Society, Brisbane, Australia, July 2002. Journal of Econometrics (forthcoming).
McAleer, M. & Y.K. Tse (1988) A sequential testing procedure for outliers and structural change. Econometric Reviews 7, 103–111.
Meyer, R. & J. Yu (2004) Multivariate Stochastic Volatility Models: Bayesian Estimation and Model Comparison. Manuscript, School of Economics and Social Science, Singapore Management University.
Nelson, D.B. (1990) ARCH models as diffusion processes. Journal of Econometrics 45, 7–38.
Nelson, D.B. (1991) Conditional heteroscedasticity in asset returns: A new approach. Econometrica 59, 347–370.
Omori, Y., S. Chib, N. Shephard, & J. Nakajima (2004) Stochastic Volatility with Leverage: Fast Likelihood Inference. Economics Working Paper 2004-W19, Nuffield College, Oxford.
Pantula, S.G. (1989) Estimation of autoregressive models with ARCH errors. Sankhya B 50, 119–138.
Pitt, M.K. & N. Shephard (1999) Time-varying covariances: A factor stochastic volatility approach (with discussion). In J.M. Bernardo, J.O. Berger, A.P. Dawid, & A.F.M. Smith (eds.), Bayesian Statistics, vol. 6, pp. 547–570. Oxford University Press.
Sakata, S. & H. White (1998) High breakdown point conditional dispersion estimation with application to S&P 500 daily returns volatility. Econometrica 66, 529–567.
Samuelson, P.A. (1975) The art and science of macromodels over 50 years. In G. Fromm & L.R. Klein (eds.), The Brookings Model: Perspective and Recent Developments, pp. 3–10. North-Holland.
Sandmann, G. & S.J. Koopman (1998) Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics 87, 271–301.
Shephard, N. (1996) Statistical aspects of ARCH and stochastic volatility. In O.E. Barndorff-Nielsen, D.R. Cox, & D.V. Hinkley (eds.), Statistical Models in Econometrics, Finance and Other Fields, pp. 1–67. Chapman & Hall.
Shephard, N. & M.K. Pitt (1997) Likelihood analysis of non-Gaussian measurement time series. Biometrika 84, 653–667. Correction (2004), 91, 249–250.
So, M.K.P., W.K. Li, & K. Lam (2002) A threshold stochastic volatility model. Journal of Forecasting 21, 473–500.
Taylor, S.J. (1986) Modelling Financial Time Series. Wiley.
Taylor, S.J. (1994) Modelling stochastic volatility: A review and comparative study. Mathematical Finance 4, 183–204.
Tsay, R.S. (1987) Conditional heteroscedastic time series models. Journal of the American Statistical Association 82, 590–604.
Tse, Y.K. & A.K.C. Tsui (2002) A multivariate generalized autoregressive conditional heteroscedasticity model with time-varying correlations. Journal of Business & Economic Statistics 20, 351–362.
Watanabe, T. (1999) A non-linear filtering approach to stochastic volatility models with an application to daily stock returns. Journal of Applied Econometrics 14, 101–121.
Watanabe, T. & Y. Omori (2004) A multi-move sampler for estimating non-Gaussian time series models: Comments on Shephard & Pitt (1997). Biometrika 91, 246–248.
Weiss, A.A. (1986) Asymptotic theory for ARCH models: Estimation and testing. Econometric Theory 2, 107–131.
Yu, J. (2004a) On leverage in a stochastic volatility model. Journal of Econometrics (forthcoming).
Yu, J. (2004b) Asymmetric Response of Volatility: Evidence from Stochastic Volatility Models and Realized Volatility. Manuscript, School of Economics and Social Science, Singapore Management University.
Zellner, A., H.A. Keuzenkamp, & M. McAleer (2001) Simplicity, Inference and Modeling: Keeping It Sophisticatedly Simple. Cambridge University Press.