
AN INTERVIEW WITH CHRISTOPHER A. SIMS

Published online by Cambridge University Press:  06 April 2004

LARS PETER HANSEN
Affiliation: University of Chicago
November 5, 2003

Abstract

Christopher Sims is a well-known intellectual leader in time-series econometrics and applied macroeconomics. Among his many honors and distinctions, he has been President of the Econometric Society and he is a member of the National Academy of Sciences. He has made fundamental contributions to both the statistical theory of time series and empirical macroeconomics. Sims' work is influential precisely because it was motivated by important problems in macroeconomics. Not only did Sims study questions of statistical approximation in abstract environments, but he also showed how to apply the resulting apparatus to a variety of specific problems confronting applied researchers. The applications include seasonality in economic time series, aggregation over time, and approximation in formulating statistical models with economic underpinnings. Moreover, Sims' contributions to causality in time series and to the development of vector autoregressive methods were complemented by an important body of empirical research. Sims has served as an effective advocate and critic of the extensively used vector autoregressive statistical methods. Motivated by his own and related empirical research, Sims is one of the leaders in rethinking how monetary policy should be modeled and in reconsidering the channels by which monetary policy influences economic aggregates. This interview with Chris Sims provides an opportunity to explore further the context of many of these contributions. Sims typically has a unique perspective on economic problems, a perspective that is articulated in his answers to a variety of questions.

Type: MD INTERVIEW
Copyright: © 2004 Cambridge University Press

Hansen: In looking back at your time as a graduate student at Berkeley and Harvard in the mid sixties, what were the important influences that shaped your thinking about economics and econometrics?

Sims: Actually, I started taking graduate courses in statistics and econometrics when I was an undergraduate at Harvard. I was a math major as an undergraduate, and in my senior year, I started taking some economics. I took a graduate course in econometrics from Henk Houthakker, who later became my advisor; and I took a graduate statistics course from Dempster.

Both classes were influential, but by that time I already knew that I was interested in both economics and statistics. I did contemplate going to graduate school in mathematics, and I remember discussing that with my advisor early in my senior year, but in the end I decided to go to graduate school in economics. I went to Berkeley for one year in 1963, where I had first-year econometrics from Dale Jorgenson and first-year economic theory from Dan McFadden. I then moved to Harvard, not because I was discontented with Berkeley academically, but for personal reasons. At Harvard, I took some more economic theory, but I'm not sure I took econometrics at that point. I worked with Houthakker on my dissertation. I wrote on embodied technological progress, a topic in which all previous models had been posed in discrete time. Houthakker had just written a book on formulating models of consumption in continuous time, so he told me I should formulate my models in continuous time. Following this advice forced me to learn a lot of mathematics. Most importantly, he put me in contact with Chipman, who was at Harvard at the time and knew the relevant mathematics. All of this probably had some influence on the fact that I later wrote papers about approximation in continuous and discrete time.

Hansen: Your early research considered a variety of problems connected to statistical approximation. This work includes the study of discrete-time approximation of continuous-time models [Sims (1971a)], the approximation of finite-parameter distributed-lag models to more general dynamic economic models [Sims (1972b)], and the general problem of statistical approximation in rich or high (infinite) dimensional parameter spaces [Sims (1971b)]. Much of this research predated related work in statistics and elsewhere. What was the original impetus for this work?

Sims: Some of the impetus for thinking about continuous- and discrete-time modeling was due to Houthakker. The vintage models I was working with easily let one express output as a function of the history of investment, but I needed to express productivity as a function of the history of output. This involved finding the inverse of linear operators whose kernels were nice functions. In discrete time, this is fairly straightforward, but in continuous time it leads to generalized functions. This was mathematically much more complicated than what Houthakker had done in his own work on consumption. I learned the technical tools that allowed me to address this and related approximation problems. The impetus for my work on approximation was then partly that I was technically ready to address these issues in approximation and partly that I was not very satisfied with the big gap between economic theory and econometric theory. Dynamic economic theory was often posed in continuous time, and econometric theory presumed an econometrician was supposed to have a true model, written down in discrete time, about which nothing was unknown except parameter values.

Hansen: How were these papers originally received? They must have looked technically intimidating to many economists at the time.

Sims: Well, I think at the time a lot of people didn't read them. So they didn't get intimidated. The paper on continuous and discrete approximation [Sims (1971a)] was submitted to Econometrica for consideration. The less sympathetic referee report claimed that everything done in the paper had already been done before. While Dale Jorgenson had previously discussed the rational approximation of lag distributions, the implied sense of approximation was too weak for statistical approximation. This issue had nothing to do with continuous- and discrete-time approximation, however. So, the referee hadn't even realized that there was a difference between approximation of a lag distribution and approximation of a continuous-time model by the estimated discrete-time model.

Since the work on infinite dimensional spaces was technically beyond what was appearing in economics journals, I sent Sims (1971b) to the Annals of Mathematical Statistics. After what, for an economics journal, was a relatively short time, the editor wrote: “Sorry it's taken so long. I had a hard time finding any referees. Here's a referee report.” The referee report said, “I really don't understand what this paper is about, but I've checked some of the theorems and they seem to be correct, so I guess we should publish it.”

At the time I don't think that many econometricians or economists read it. Tom Sargent was an exception. He read my papers on approximating continuous-time models and my Journal of the American Statistical Association paper [Sims (1972b)] on approximation of discrete-time distributed-lag models using frequency-domain methods, and he became a promoter of them. Tom was, of course, an important reader, and his influence got the work some attention, but it's true that most economists found these methods hard to follow.

Hansen: Your first job was as an assistant professor at Harvard. What was it like being a junior faculty member there?

Sims: It was probably not that much different from being a junior faculty member almost anywhere. Harvard was certainly different from Minnesota where I moved to later, though. I actually contemplated leaving Harvard immediately for Minnesota, when I finished my Ph.D. The reason I didn't was that they announced, during the time when I was finishing my degree, that they were hiring Griliches and Jorgenson. I thought it would be interesting to overlap with them for a little while, and it was. But after two years there, I decided to move to Minnesota which was a much livelier place. There was a sense of intellectual excitement at Minnesota that I didn't have at Harvard at that time.

Hansen: I know that macroeconomists in the seventies, including Friedman, were intrigued by your paper “Money, Income and Causality” [Sims (1972a)]. Was this the first of your applied papers to attract considerable interest? What type of reactions did it elicit from macroeconomists?

Sims: It is fair to say that this was the first of my macroeconomic papers that elicited considerable interest. There were two other applied pieces that I can think of that preceded it. My paper [Sims (1969)] on double deflation of value added still does occasionally get cited by people. Index number theory is something that not many people today pay attention to. Every few years somebody thinks about it again, but there are not that many references in the down-to-earth application area. I also had a paper on evaluating Dutch macroeconomic forecasts [Sims (1967)], which I think attracted very little attention. “Money, Income and Causality” attracted a lot of interest because it came out at the peak of the monetarist-Keynesian controversy. A lot of macroeconomics research was centered on this controversy. I was a Harvard Ph.D. who had nothing to do with Chicago, writing a paper that seemed to say that Friedman was right and all Keynesians were wrong. So, there was a lot of artillery brought to bear against the conclusions in my paper.

I had a conversation with Tobin when I presented the paper at Yale. He was skeptical, but not nearly as critical as a lot of other people were. He recognized that even if you accept money in the income regression as exogenous and interpret the regression equation as characterizing the response of the economy to the money stock, the estimated equation still implies that only a fairly small fraction of all output variation was explained by the money stock. What was true then and is still true now is that it's very hard to get evidence that monetary policy is as important in generating business cycles as most people seem to think it is, and certainly as Friedman seemed to think it was at the time. Tobin saw that this result really didn't undermine the view that there was a lot else going on in the economy and that possibly a lot of other policies would be important.

The first time I talked to Fischer Black about it, he said this result was entirely spurious, and he was essentially right. He said that, by a Granger causality test, stock prices would appear to cause everything because stock prices are unpredictable. While I knew that and agreed with him on that point, I argued that money is very different. The money stock, as Friedman would explain to us over and over again, is actually quite tightly controlled by the Federal Reserve System. So we have to think of it as moving in response to deliberate action by policymakers and being nothing like an asset price. That was my answer to him at the time, but in fact it is not a good answer. Fischer Black was the only person who really saw this objection. Most of the criticisms were either from Keynesians who just didn't believe it and didn't trust the methodology, or from statisticians and econometricians who bridled at calling this test a test for causality.

Hansen: Let me follow up on two of the aspects of your answers. While the formulation of causal restrictions on time-series representations has proved to be of very considerable value, the term “causality” itself seemed to generate much controversy. Were the resulting dialogs productive or merely distracting?

Sims: They were mostly distracting. I still think “causality test” is good terminology for these tests. I wrote a paper [Sims (1977)] that virtually nobody has read and understood. Some people have told me they have read it and couldn't understand any of it. This paper treats formally the semantics of causality, discussing the different ways the term has been used. Most people think they understand intuitively what causality means and what it means to say that object x causes object y. I think it's also fair to say that most people would have a hard time explaining exactly what the precise meaning is. We actually use the term “cause” in a variety of different ways.

The term “causality” has been used over and over again. Granger and I used it to refer to a recursive ordering among the things determining a variable. In fact, in engineering, causality was used in this way before Granger and I used the term. Causality has also been used to refer to one-sided distributed-lag relationships in which the right-hand-side variables are exogenous. Econometricians have argued that good econometrics is not just looking for correlations; it is looking for regression relationships in which the right-hand-side variables can legitimately be conditioned on. In applied work, when people put variables on the right and on the left, there was always an implicit notion of a causal ordering involved in making those decisions. Yet nobody was discussing formally what the connection was between a causal ordering and a statistically legitimate right-hand-side variable in a regression equation. Granger causality links these notions precisely.
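To fix ideas for readers, here is the standard formal statement behind this discussion, phrased in terms of linear projections (an editorial gloss, not part of Sims's answer). A series $y$ fails to Granger-cause a series $x$ when past $y$ adds nothing to the prediction of $x$ from its own past:

$$ E\left[x_t \mid x_{t-1}, x_{t-2}, \ldots, y_{t-1}, y_{t-2}, \ldots\right] = E\left[x_t \mid x_{t-1}, x_{t-2}, \ldots\right]. $$

An equivalent characterization, the one exploited in the money-income paper, is that $y$ fails to Granger-cause $x$ exactly when, in a projection of $y_t$ on past, current, and future values of $x$, the coefficients on future $x$ are all zero; this is what makes $x$ a statistically legitimate right-hand-side variable in a one-sided distributed-lag regression for $y$.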

It's true that the intuitive causal orderings are not necessarily Granger causal orderings and vice versa. Fischer Black's insight was perfectly correct on that. He had an example in mind where a Granger causal ordering would not correspond to any intuitive causal ordering. But there are many cases, probably most cases in applied work that involve estimating a regression equation, where intuitive notions of a causal ordering correspond precisely to a Granger causal ordering. It would be better if people understood that. Because the first application of this idea was to a very controversial subject, there are a lot of people who think that the one thing they know about Granger causal orderings is that they don't have anything to do with causality. I think this is a big mistake.

Hansen: Let me return to the substantive component of your “Money, Income and Causality” paper. In comparing this contribution to your later work, there is an interesting evolution in thought. The endogeneity of money is emphasized in your subsequent empirical work, and you were one of the originators of what is now called the fiscal theory of the price level. Could you comment on this evolution, and on how it was driven by empirical findings and changes in macroeconomic policymaking?

Sims: I realized at the beginning that a policy authority that systematically controlled the money stock would try to offset business-cycle fluctuations. This could create a situation where money would appear to be exogenous, but the relationship would have nothing to do with the causal relationship between money and the business cycle. I thought at first that that was very unlikely, partly because monetarists had conditioned us so well to accept the idea that the money stock was the relevant instrument for monetary policy. Monetarists argued this despite the fact that week to week it was hard to control the money supply, and despite the fact that the money supply wasn't directly controlled by the monetary authorities. Then one of my first students at Minnesota, Yash Mehra, who had learned about causality from me, decided to do causality testing on money demand equations [Mehra (1978)]. These equations had money on the left-hand side of the equation and interest rates and output on the right. To my surprise, he found that those equations passed tests for exogeneity of interest rates and output. This finding was qualitatively the opposite of what I had found in Sims (1972a). In a later paper, Sims (1980a), I followed up on this idea. I looked at systems, not just single equations, but systems with interest rates among the variables. I realized that, with interest rates in the system, money was quite predictable and that it was this predictable part of money that was most strongly associated with output. None of these findings fit the simple monetarist framework or its rational-expectations natural-rate variant.
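As a concrete illustration of the kind of test involved (a minimal editorial sketch, not Mehra's actual specification or data; the series, lag length, and names are placeholders), the Granger-causality F test compares a regression of a series on its own lags with one that adds lags of the candidate causes. To check exogeneity of the right-hand-side variables of a money demand equation in this sense, one would take the interest rate or output as the series being predicted and ask whether lags of money add predictive power.

```python
import numpy as np

def lagmat(x, p):
    """Columns of lags 1..p of a 1-D series, aligned so row t corresponds to x[t-1], ..., x[t-p]."""
    T = len(x)
    return np.column_stack([x[p - j:T - j] for j in range(1, p + 1)])

def granger_f_test(y, candidates, p):
    """F test that lags of each series in `candidates` add nothing to an AR(p) for `y`.

    y          : 1-D array, the series being predicted (e.g., an interest rate)
    candidates : list of 1-D arrays whose Granger-causal role is tested (e.g., money)
    p          : lag length
    Returns (F, q, df): the F statistic, number of restrictions, residual degrees of freedom.
    """
    T = len(y)
    y_t = y[p:]
    X_r = np.column_stack([np.ones(T - p), lagmat(y, p)])              # restricted: own lags only
    X_u = np.column_stack([X_r] + [lagmat(x, p) for x in candidates])  # unrestricted: add other lags

    ssr_r = np.sum((y_t - X_r @ np.linalg.lstsq(X_r, y_t, rcond=None)[0]) ** 2)
    ssr_u = np.sum((y_t - X_u @ np.linalg.lstsq(X_u, y_t, rcond=None)[0]) ** 2)

    q = X_u.shape[1] - X_r.shape[1]     # number of zero restrictions being tested
    df = len(y_t) - X_u.shape[1]
    F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
    return F, q, df
```

An F statistic below the relevant critical value is a failure to reject the null that the added series do not Granger-cause the predicted variable, which is the sense in which Mehra's money demand equations “passed tests for exogeneity” of interest rates and output.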

It is because of these findings that I also started thinking about what happens in an equilibrium model when monetary authorities smooth interest rates. It doesn't take very long fiddling with such models to realize that if the monetary authority is smoothing interest rates, all of a sudden Fischer Black is right. The money stock starts moving in line with asset prices. While, strictly speaking, money will be statistically endogenous, it's likely to be very close to being causally prior in a Granger sense, for the same generic reason that asset prices are. My view now is that, in countries where interest rates are held fairly smooth and the monetary authority is not attempting to control monetary aggregates tightly, the Granger causality from monetary aggregates to other macroeconomic variables is likely not a true causal relationship.

I often say that the Phillips curve is not the best example of the Lucas critique. The best example of a statistical relationship that a rational-expectations equilibrium model reveals to be spurious, and hence not usable as a mechanical policy trade-off, is the regression of GDP on money.

Hansen: There have been a variety of papers devoted to theoretical underpinnings of the fiscal theory of the price level that you, Mike Woodford, and others have been advocating. Have you found this work to be a useful elaboration and clarification?

Sims: Woodford and I were writing from different perspectives on this topic at about the same time. Woodford continued to write on the topic. Eric Leeper, who was a student of mine at Minnesota, worked out the local existence and uniqueness characterizations for a fiscal theory [Leeper (1991)]. John Cochrane helped explain the fallacy of thinking that the government budget constraint is no different from private budget constraints [Cochrane (2003)]. This work elaborating, explaining, and examining underpinnings of the theory has been useful.

Now there have also been other papers on this topic that may be what you had in mind. These papers question whether the theory makes any sense at all. I've tried to understand what underlies those objections. My current view is that the strongest objections come from people who really have in mind a model unlike any of the standard models in use in macroeconomics today. In such a model, the central bank and the treasury have separate budget constraints and we can contemplate them going bankrupt independently. Actually, I have some work underway now that discusses models with this separation [Sims (2000)]. In the United States, such models seem quite irrelevant, but they may be relevant in the European Union, where the institutional setup makes it very clear that it's contemplated that treasuries can go bankrupt without the European Central Bank going bankrupt. It also appears that the European Central Bank could quite easily fail without the treasuries failing. In an environment like this, game-theoretic notions come into play and you can get conclusions from the fiscal theory that do not follow from traditional monetarist theory by any means. I view this type of model really as an interesting elaboration of the fiscal theory.

But the critics who have taken this line—for example, McCallum (2001) and Buiter (2002)—have used intuitive notions that could only be backed up in a model with separate central bank and treasury budget constraints to criticize the theory as it works out in models with a unified government budget constraint. And the criticisms, when considered in a model with two government budget constraints, turn out in my view to be basically wrong-headed.

Another line of criticism was from people who argued that the notion of competitive equilibrium in FTPL (fiscal theory of the price level) models, unlike that in standard models, could not be embedded in a careful game-theoretic framework. Marco Bassetto's (2002) work was seen initially as supporting this view. In its final form, though, Bassetto's work pointed out that the incompleteness, from a game-theoretic viewpoint, of the specification of policy in FTPL models was no different from similar incompleteness in standard macroeconomic models. Furthermore, it is straightforward to resolve this incompleteness so that the FTPL equilibria emerge in exactly the form originally put forward under simple competitive notions of equilibrium.

Hansen: Let me change gears here a little bit. After you were an Assistant Professor at Harvard for a few years, you came to Minnesota in 1970. Tom Sargent and Neil Wallace were there at the time. This proved to be a rather influential group of young macroeconomists. What was Minnesota like in those days?

Sims: It was an exciting place to be. Jack Kareken was important in recruiting Wallace and Sargent. Sargent helped to recruit me with a phone call. The process of Sargent developing his approach to teaching macro was great to watch, and there were new ideas just bubbling up around the place. Sargent, Wallace, Kareken, and Muench all had joint projects at various times related to monetary policy, partly stimulated by the Minneapolis Fed, where these guys had part-time research appointments. I was teaching both econometrics and macroeconomics then, but the macroeconomics was on a one-quarter-a-year or sometimes one-quarter-every-other-year basis. Sargent's teaching put heavy emphasis on the value of empirical work with explicit stochastic models, so it created a demand for the teaching of econometrics. It was also clear that anybody who wanted to work with Sargent on a dissertation needed to know time-series econometrics. So it was a very good environment to be in, even though there were some differences among us, certainly political and some methodological. The atmosphere in the department then was as positive and mutually supportive intellectually as any place I've ever been.

Hansen: Your work on vector autoregressions (VAR's) has had an enormous impact on applied research in macroeconomics. Presumably this was due to both the tractability and the appeal of the method. While the appeal of VAR models is based in part on skepticism of the empirical validity of tightly parameterized models, shocks must still be identified through the use of theory. Has your thinking about this identification changed over time? As I recall, the research reported in your paper “Macroeconomics and Reality” [Sims (1980b)] used primarily a recursive identification scheme?

Sims: I actually considered two identified models in that paper. Some of the people who cite it seem to never have read it in any detail. I've often seen it cited as a reference for the viewpoint that conclusions can be drawn from unidentified models or that identification is impossible. The fact that there are actually two identified models in that paper is sometimes missed, but it's true they were recursive.

I'm still skeptical of tightly parameterized models. I think the most reliable way to do empirical research in macroeconomics is to use assumptions drawn from “theory,” which actually means intuition in most cases, as lightly as possible and still develop conclusions. Now of course there is not a one-dimensional ranking of theoretical restrictions for how light they are. So, this approach tends to lead to experimentation with different kinds of models and different restrictions, and essentially informally or formally averaging across the results. I thought that was the best way to do research when I wrote that paper and still do.

My thinking has changed in a few ways. First, I now better appreciate the importance, for getting people to use a model, of their being able to tell stories with it. Even if you don't have a detailed identification scheme that provides a behavioral interpretation that you trust for every shock, it may be worthwhile to experiment with such schemes. People feel more comfortable if you can provide at least one story about what's going on inside the model so it doesn't look to them like a black box. And in part that's what led to my paper with Leeper [Leeper and Sims (1994)] called “Toward a Modern Macroeconomic Model Usable for Policy Analysis.”

The other change in perspective began when I did forecasting seriously for a while. There were several years during which I was providing a fresh forecast every quarter. I discovered that to get a model that really fits I had to have quite an elaborate reduced-form setup that allowed for time-varying variances, nonnormal disturbances, and time-varying parameters. By the time this was all set up, the dimensionality of the disturbance vector in the model was extremely high; every coefficient required a separate disturbance. I felt that the whole setup was becoming unwieldy, and it was clearly higher dimensional than necessary. Another motivation for the work with Leeper was the idea that, by using a theoretical model with a relatively small number of parameters as a base, one might have a starting point for modeling time variation and nonstationarity in a manner that is not inherently so high dimensional. So those are the directions of the evolution of my thinking about VAR's.

Hansen: Often, structural VAR identification looks like Cowles Commission–style exclusion restrictions but applied to either the instantaneous response matrices or the long-run response matrices of multiple time series to economic shocks. Is this a fair characterization?

Sims: There are two versions of identification in VAR models that have been used with some frequency. One is a version in which you leave the lag coefficients unrestricted and restrict only the contemporaneous responses to the shock. Those restrictions by themselves would fit perfectly into a Cowles Commission setup. The important difference from Cowles-style restrictions is that, in the identified VAR setup, the structural disturbances are typically independent of, or at least orthogonal to, one another. This orthogonality is absent from the Cowles Commission framework.

My view is that this restriction is an advance over the Cowles Commission framework. People who use the Cowles Commission framework almost always back into making assumptions of orthogonality in structural disturbances anytime they really try to use the model to project effects of an intervention. If you have structural disturbances that are correlated, anytime you intervene and change the parameters of a structural equation in a model you have to ask yourself what was the source of the correlation and how should it be altered by the intervention. You always have the two extreme choices. One possibility is that the correlations reflect passive responses of the equation's disturbances to other disturbances. Changing the equation itself won't change the correlation structure of the rest of the disturbances. Or you can take the opposite view: To the extent that a money demand equation has residuals that are correlated with the money supply shock, this represents a causal impact of money supply decisions on money demand. Under this interpretation, you extract all the covariation from the other disturbances before you arrive at policy-invariant disturbances. There is no theory in the Cowles Commission approach for how you do this extraction. You have to take a stand on these issues if you are going to really use the model. This is the reason for the added structure in the VAR literature. In most applications, I think that it is the right way to go.
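To make the first version concrete, here is a minimal numerical sketch (an editorial illustration with simulated data; the number of variables, lag length, ordering, and horizon are arbitrary choices, not anything taken from the interview). The reduced-form VAR is estimated by OLS with the lag coefficients left unrestricted, and mutually orthogonal shocks are obtained by taking the Cholesky factor of the residual covariance matrix, which is exactly a recursive restriction on the contemporaneous responses.

```python
import numpy as np

def estimate_var(Y, p):
    """OLS estimates of a reduced-form VAR(p): Y_t = c + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t."""
    T, n = Y.shape
    X = np.column_stack([np.ones(T - p)] +
                        [Y[p - j:T - j] for j in range(1, p + 1)])  # [1, Y_{t-1}, ..., Y_{t-p}]
    Yt = Y[p:]
    B = np.linalg.lstsq(X, Yt, rcond=None)[0]           # (1 + n*p) x n coefficient matrix
    U = Yt - X @ B                                      # reduced-form residuals
    Sigma = U.T @ U / (len(Yt) - X.shape[1])            # residual covariance matrix
    return B, Sigma

def recursive_irf(B, Sigma, n, p, horizon=20):
    """Impulse responses to orthogonalized (Cholesky) shocks under a recursive ordering."""
    A = [B[1 + j * n:1 + (j + 1) * n].T for j in range(p)]   # lag coefficient matrices A_1..A_p
    P = np.linalg.cholesky(Sigma)                            # lower triangular: recursive contemporaneous impact
    Psi = [np.eye(n)]                                        # reduced-form moving-average coefficients
    for h in range(1, horizon + 1):
        Psi.append(sum(A[j] @ Psi[h - 1 - j] for j in range(min(p, h))))
    return np.array([Psi_h @ P for Psi_h in Psi])            # irf[h, i, j]: response of variable i to shock j

# Purely illustrative: simulated data, 3 variables, 2 lags.
rng = np.random.default_rng(0)
Y = np.cumsum(rng.standard_normal((200, 3)), axis=0) * 0.1 + rng.standard_normal((200, 3))
B, Sigma = estimate_var(Y, p=2)
irf = recursive_irf(B, Sigma, n=3, p=2)
```

Reordering the variables changes the Cholesky factor and hence the implied recursive identification; choosing and defending that ordering is the substantive identification step discussed above.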

The second approach is to make restrictions on the long-run response matrices, but again to assume that the shocks are orthogonal. Restrictions on long-run response matrices are probably not as widespread because, when they lead to overidentification, they can result in unwieldy computational problems. In contrast, you can handle overidentification in restrictions on the contemporaneous covariance matrices with much less computational difficulty.
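For completeness, the second approach can be written compactly (again an editorial gloss rather than part of the answer). If the structural moving-average representation of the data is

$$ y_t = C(L)\,\varepsilon_t, \qquad E\left[\varepsilon_t \varepsilon_t'\right] = I, $$

then the identifying zeros are imposed on the matrix of cumulative long-run responses $C(1) = \sum_{j \ge 0} C_j$, for example by requiring $C(1)$ to be lower triangular so that particular shocks are restricted to have no long-run effect on particular variables, while the orthogonality of the shocks is maintained.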

There is another informal aspect to identification. Researchers will make some explicit restrictions and then look at the plausibility of the results. For instance, specifications in which the responses to what are purported to be monetary policy shocks are clearly ridiculous tend not to be reported. This informal aspect has bothered some people, including Uhlig (2001), Faust (1998), and others. They have explored what happens if you make these prior plausibility restrictions formal. With modern computational methods, this approach can be feasible. The result of these exercises is that the empirical findings are very robust. Faust doesn't explain his results that way, but my reading of his paper is essentially a finding of robustness.

In this VAR literature, you see a phenomenon that is not treated in econometrics texts. We almost always really have fewer reliable identifying restrictions than we need to identify the full set of parameters. We are always experimenting with a variety of identification schemes, all of which are hard to reject. We evaluate this identification partly on the basis of how well the resulting econometric model fits the data and partly on the basis of how much sense the identification makes.

Hansen: What do you see as being the important empirical insights that emerged from the VAR literature?

Sims: I think the most important ones have been the ones about sorting out the endogeneity of monetary policy that I've already talked about a little bit. I think that literature has had a really major impact on the way people think about monetary policies. The basic finding from the VAR estimates, that the effects of monetary policy shocks on output and prices are quite smooth and slow, is widely accepted now, even among policymakers. This pattern holds up under many different variations of a VAR specification.

Hansen: You have had a longstanding interest in Bayesian statistics and econometrics. Your research in Bayesian econometrics has targeted situations in which Bayesian and classical perspectives can lead to important differences in practice, as in Sims and Uhlig (1991). A leading example of this is research on unit roots. Is this a fair assessment, and are there other important examples?

Sims: Early on in my career, I didn't see that the difference between Bayesian and classical thinking was very important. So I didn't get involved in defending Bayesian viewpoints or get into arguments, because I thought that was irrelevant. Then, I noticed that it really made a difference in the unit-root literature. The construction of the likelihood function for an autoregression conditioned on the initial values of the time series proceeds in the same way whether or not nonstationarity is present. So, the form of inference implied by the likelihood principle should be the same for stationary and nonstationary cases. Classical distribution theory seems to imply that we must use very different procedures when we have an autoregression that may include a unit root.
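The point about the conditional likelihood can be seen in the simplest case, a Gaussian AR(1) (an editorial illustration in the spirit of Sims and Uhlig (1991), not a quotation from the interview):

$$ L\!\left(\rho, \sigma^2 \mid y_1, \ldots, y_T;\, y_0\right) \;\propto\; \sigma^{-T} \exp\!\left( -\frac{1}{2\sigma^2} \sum_{t=1}^{T} \left(y_t - \rho\, y_{t-1}\right)^2 \right). $$

As a function of $\rho$ for the observed data, this has the same Gaussian shape centered on the OLS estimate whether the true $\rho$ is below one or exactly one; the familiar nonstandard unit-root distributions arise only when one asks how the OLS estimator behaves across repeated samples.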

The Bayesian perspective implies that any special character of inference in the presence of possible nonstationarity should arise from the differing implications (in stationary and nonstationary cases) of conditioning on initial conditions, and from the related fact that “flat” priors can imply bizarre beliefs about the behavior of observables. So when such differences arise in the way you handle models that are dynamic and might have a unit root, they should come from the imposition of a reasonable prior for use in scientific reporting, and that's a very different problem, formally and intuitively, from classical distribution theory for unit roots.

Another example of when it makes a lot of difference whether you take a Bayesian or classical perspective is in testing for break points. When you are testing for one break point, both Bayesian and some non-Bayesian approaches will trace out the likelihood as a function of the break point (though non-Bayesians are more likely to trace out the maximized, and Bayesians the integrated, likelihood). The Bayesian, or likelihood principle, approach would tell you that in a change-point problem, the precision of your knowledge about the change point, given the sample, is determined by the shape of the likelihood you confront in the sample. Classical approaches can lose track of this point, by thinking about the distribution of the likelihood function over all possible samples, rather than focusing on the likelihood function that's in front of you.
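As a small illustration of tracing out the likelihood over break dates (an editorial sketch for a single mean shift with made-up data; for simplicity it traces the maximized rather than the integrated likelihood, which, as noted above, is the more Bayesian object):

```python
import numpy as np

def break_point_loglik(y):
    """Profile log likelihood over candidate break dates for a one-time shift in mean.

    Illustrative model: y_t = mu1 + e_t for t < k and y_t = mu2 + e_t for t >= k,
    with e_t ~ N(0, sigma^2); for each k, the means and variance are set to their MLEs.
    """
    T = len(y)
    ks = np.arange(2, T - 1)                  # keep at least two observations in each regime
    loglik = np.empty(len(ks))
    for i, k in enumerate(ks):
        resid = np.concatenate([y[:k] - y[:k].mean(), y[k:] - y[k:].mean()])
        sigma2 = resid @ resid / T            # variance MLE given a break at k
        loglik[i] = -0.5 * T * (np.log(2 * np.pi * sigma2) + 1)
    return ks, loglik

# Simulated example: the mean shifts from 0 to 1 at t = 120.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.0, 1.0, 80)])
ks, loglik = break_point_loglik(y)

# Under a flat prior over dates, the normalized likelihood shape in this sample
# summarizes how precisely the break point is pinned down.
post = np.exp(loglik - loglik.max())
post /= post.sum()
print("most likely break date:", ks[np.argmax(loglik)])
```

The concentration of this curve around its peak in the sample at hand, rather than the sampling distribution of a test statistic across hypothetical samples, is what the likelihood-principle perspective treats as the measure of precision.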

Though there is relatively little Bayesian work on instrumental variables, I think there could be more, and it might make a distinct contribution. Instrumental variable estimation is not likelihood-principle based, but it applies to models for which there may be a likelihood. Also, one can ask what constitutes good inference conditional on the moments that go into the instrumental variable estimate instead of conditional on the whole data set. I think one may be able to get conclusions there that provide a more solid foundation for the discussion of weak instruments, which is an important applied topic.

Hansen: As a researcher, you have been a great example of someone for whom methodological and empirical interests are intertwined. As economics and econometrics become more developed, there is an inevitable pull toward specialization. Econometric theory is becoming a separate field in many places. Is there a good reason to be concerned about econometrics becoming too specialized too quickly?

Sims: In all kinds of fields, including economics, there's a split between more abstract and more applied theorists, and between theory and empirical work in general. Within econometrics, there's a division between econometric theory and applied econometric work. It is important that people work on connecting these areas. There's an internal social dynamic that makes people respond more to work within their own specialty, and that can leave people who actually bridge specialties without firm constituencies in the profession. Moreover, there is value to having economists involved in policy issues, because that creates a pressure to connect theory and practice and to contribute to economic research explicitly connected to real-world problems.

So I agree that excessive separation of econometrics from the rest of economics is not a good thing, and that there is, at least in some places, momentum in that direction. There is an opposite danger, though: By insisting that only people who have strong credentials in a substantive area of research are real, or useful, econometricians, some departments have, in my view, created environments hostile to theoretical econometrics, and thereby also to rigorous thinking about empirical methodology. Communication between econometricians and non-econometrician economists is important, but this happens best when there are econometricians who are truly dedicated to their subject rubbing shoulders with substantively oriented economists. When the strong abstract econometrician and the substantive researcher happen to be the same person, that's great, but it's rare.

Hansen: I know that you have continual contact with research in Federal Reserve banks. What role do you see time-series econometrics playing in research that supports the formulation and implementation of monetary policy?

Sims: I wrote a paper [Sims (2002)] recently that is concerned in part with this issue. I argue there that econometricians have failed to confront the problems of inference that are central to macroeconomic policy modeling. The first serious policy models inspired, and then used, the Cowles methodology, but, as the models expanded to try to incorporate all the important sources of information about the economy, they reached a point where non-Bayesian approaches to inference ceased providing answers. The models had many equations, many predetermined variables, and relatively few observations. Two-stage least squares using all the available instruments simply reproduced, or nearly reproduced, OLS. Maximum likelihood estimators tended to be hard to compute and, once computed, often turned out to be unreasonable because they corresponded to isolated peaks in the likelihood. Use of small-sample distributions of estimators to form confidence intervals and tests was impossible for models of this scale, and the asymptotic theory was clearly unreliable because of the scant degrees of freedom.

Academic time-series modeling was focusing on unit roots and cointegration, suggesting hierarchical layers of statistical tests to pin down the cointegration structure before estimation. However, in very large models, carrying out such layers of tests is generally impractical.

Academic macroeconomic theorizing was focusing on rational expectations, which was not in itself a problem. But leading figures, such as Sargent and Lucas, associated rational expectations with the fallacious view that there is a fundamental distinction between analyzing a change in a policy “rule” and analyzing a change in a policy variable. A change of policy rule can in fact only be consistently modeled as a particular, nonlinear sort of stochastic shock. The fallacious contrary view led to a generation of graduate students who believed that the bread and butter of quantitative policy analysis—making projections conditional on values of random variables that appear explicitly in a model—was somehow deeply mistaken or internally contradictory. The result was a long period with little or no academic interest in contributing to or criticizing the models actually used in making monetary policy.

The models are now in a sorry state, but we may be at the point where Bayesian methods and thinking can address these problems and begin to close the gap between academic macro and econometrics and the actual practice of quantitative policy modeling. Some recent papers by Smets and Wouters (e.g., 2002, 2003) are particularly promising along this line.

Hansen: You recently published a paper on “rational inattention” [Sims (2003)] in which you apply results from information theory to build a model of sluggishness in decision making. What led you to use this formalism, and where do you see this research headed?

Sims: I wrote a paper called “Stickiness” [Sims (1998)] a few years ago in which I set out to show that variations on standard theoretical assumptions about menu costs and inertia could match the qualitative behavior of the macro data. I noted, though, that the usual theoretical setups implied that either prices were sticky and real variables “jumpy,” or real variables were sticky and prices jumpy. The data show that both classes of variables are about equally inertial. Furthermore, any sort of adjustment-cost formulation tends to imply not only that the variables subject to adjustment costs should respond slowly and smoothly to other variables, but also that they should have smooth time paths. The data show the slow and smooth cross-variable responses, but not the correspondingly smooth time paths. The stickiness paper showed how you could get both, but via a kind of hierarchical adjustment-cost setup that seems hard to connect to data or even to economic intuition.

At the end of that paper is an appendix pointing out that there might be reason to think that inertia due to information-processing constraints, modeled using the notion of Shannon channel capacity, could account for the way the data behave in a more intuitively appealing way. The more recent paper you mention works out the application of the method to general linear-quadratic dynamic optimization problems, and shows that it does in fact account for the qualitative nature of observed inertia.
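For readers unfamiliar with the formalism, the constraint at the center of this approach can be stated in the simplest Gaussian case (an editorial gloss rather than Sims's own notation). If an agent tracks a state with prior variance $\sigma^2$ and, after processing information, holds posterior variance $\omega^2$, the information flow used is the mutual information

$$ I = \tfrac{1}{2}\,\log_2\!\frac{\sigma^2}{\omega^2} \quad \text{bits per period}, $$

and rational inattention imposes $I \le \kappa$, where $\kappa$ is the agent's Shannon channel capacity. Because a finite $\kappa$ keeps $\omega^2$ bounded away from zero, choices respond to the state smoothly and with idiosyncratic error, as if the agent faced a signal extraction problem even when precise information is freely available.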

Few economists know any information theory, though many have told me they find the intuition behind the formalism appealing. For the time being, these ideas are propagating slowly because there are few people able to actually advance the formal frontier. I'm working on the area myself, trying to construct easily used software that will let these methods be applied more widely. The rational inattention setup implies that people will behave as if they face signal extraction problems even when there are no external costs to obtaining precise information. This should encourage more attention to models with imperfectly informed agents, and in fact has already done so to some extent [e.g., Woodford (2001)], even before models that ground the form of the signal extraction problems in information theory are available.

References

Bassetto M. 2002 A game-theoretic view of the fiscal theory of the price level. Econometrica 70, 2167–2195.
Buiter W.H. 2002 The fiscal theory of the price level: A critique. Economic Journal 112, 459–480.
Cochrane J.H. 2003 Money as Stock. Discussion paper, University of Chicago GSB, http://gsbwww.uchicago.edu/fac/john.cochrane/.
Faust J. 1998 The robustness of identified VAR conclusions about money. Carnegie-Rochester Conference Series on Public Policy 49, 207–244.
Leeper E.M. 1991 Equilibria under “active” and “passive” monetary and fiscal policies. Journal of Monetary Economics 27, 129–147.
Leeper E.M. & C.A. Sims 1994 Toward a modern macroeconomic model usable for policy analysis. NBER Macroeconomics Annual, 81–117.
McCallum B.T. 2001 Indeterminacy, bubbles, and the fiscal theory of price level determination. Journal of Monetary Economics 47, 19–30.
Mehra Y.P. 1978 Is money exogenous in money-demand equations? Journal of Political Economy 86 (2, pt. 1), 211–228.
Sims C.A. 1967 Evaluating short-term macroeconomic forecasts: The Dutch performance. Review of Economics and Statistics 49, 225–236.
Sims C.A. 1969 A theoretical basis for double deflation of value added. Review of Economics and Statistics 51, 470–471.
Sims C.A. 1971a Discrete approximation to continuous time distributed lags in econometrics. Econometrica 39, 545–563.
Sims C.A. 1971b Distributed lag estimation when the parameter space is explicitly infinite-dimensional. Annals of Mathematical Statistics 42, 1622–1636.
Sims C.A. 1972a Money, income, and causality. American Economic Review 62, 540–552.
Sims C.A. 1972b The role of approximate prior restrictions in distributed lag estimation. Journal of the American Statistical Association 67 (337), 169–175.
Sims C.A. 1974 Seasonality in regression. Journal of the American Statistical Association 69 (347), 618–626.
Sims C.A. 1977 Exogeneity and causal orderings in macroeconomic models. In C.A. Sims (ed.), New Methods in Business Cycle Research, pp. 23–43. Federal Reserve Bank of Minneapolis.
Sims C.A. 1980a Comparison of interwar and postwar business cycles: Monetarism reconsidered. American Economic Review 70, 250–257.
Sims C.A. 1980b Macroeconomics and reality. Econometrica 48, 1–48.
Sims C.A. 1998 Stickiness. Carnegie-Rochester Conference Series on Public Policy 49, 317–356.
Sims C.A. 2000 Fiscal Aspects of Central Bank Independence. Technical report, Princeton University.
Sims C.A. 2002 The role of models and probabilities in the monetary policy process. Brookings Papers on Economic Activity 2002 (2), 1–62.
Sims C.A. 2003 Implications of rational inattention. Journal of Monetary Economics 50, 665–690.
Sims C.A. & H.D. Uhlig 1991 Understanding unit rooters: A helicopter tour. Econometrica 59, 1591–1599.
Smets F. & R. Wouters 2002 An Estimated Stochastic Dynamic General Equilibrium Model of the Euro Area. Working paper, European Central Bank and National Bank of Belgium.
Smets F. & R. Wouters 2003 Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach. Discussion paper, European Central Bank and National Bank of Belgium.
Uhlig H. 2001 What Are the Effects of Monetary Policy on Output? Results from an Agnostic Identification Procedure. Discussion paper, Humboldt University, Berlin.
Woodford M. 2001 Imperfect Common Knowledge and the Effects of Monetary Policy. Discussion paper, Princeton University.