1. Introduction
Although empirical macromodels date back to Tinbergen (1936), first for the Netherlands and subsequently for the US and UK, and to Klein (1950), with small models following at the NIESR in 1969 and the Treasury in 1970 (Surrey, 1971), they were only firmly established in the UK in the early 1970s with the work at the London Business School (LBS), the NIESR, the UK Treasury (HMT), the Bank of England (BoE) and elsewhere. These were medium-sized (300 equations or so), quarterly, econometric models, largely estimated by single-equation methods and set around the national accounts. They were used both for forecasting and for policy simulations. Although they became influential in policy circles, most proved deficient in accounting for the turbulent economic changes that came to characterise the 1970s, with its succession of oil price hikes by OPEC and periods of considerable industrial unrest. The reaction to this perceived failure was a general scepticism about them, both for forecasting and for policy simulations, culminating in the heavyweight criticisms of Lucas (1976) and Sims (1980). The result was a move to a more theory-based form of modelling in response to the Lucas Critique (later described as Dynamic Stochastic General Equilibrium (DSGE) models) and a much closer attention to whole-model (maximum-likelihood) estimation and the importance of adequate identification in response to Sims.
2. The 1970s
In the early part of the decade the main use of the NIESR economic model was in producing its forecast, although it was also used for policy and other simulations. It was seen by many as a more transparent version of the Treasury forecast and was sometimes referred to as "The Treasury in Exile". It is fair to say that, in this period, relatively little attention was paid to the model's underlying theoretical basis or its econometric methods, though this trait was fairly typical among UK modellers at the time. The NIESR had the reputation, justly, of being Keynesian ('Old Keynesian', that is) in that it emphasised aggregate demand management and was sceptical of Monetarism, be it the monetarism of Milton Friedman or the influential variants developed in the UK at the LBS, Manchester, City and Liverpool universities. In practice this meant the NIESR broadly favoured policy interventionism and was also noted for its regular support for incomes policy as the primary anti-inflation tool.
This was a decade of very considerable economic and social upheaval, brought about initially by the OPEC oil price hikes and exacerbated by long-lasting strike action as the government attempted to reduce inflation by using wage freezes in what was a period of high inflation and unemployment (stagflation), resulting in the imposition of a three-day week. After a change of Prime Minister in 1976, and in the face of a soaring current account deficit, the Treasury applied to the IMF for a loan (bailout). After this temporary respite, the end of the decade saw a return of widespread strikes and the "Winter of Discontent". The finding of research at the NIESR that any effects of an incomes policy were purely temporary, with a 'catch-up' following the period of imposition of pay restraint, appeared to be borne out over this episode (Henry, 1981).
Over much of this period the macro team at the NIESR was funded by a series of special grants from the SSRC, which became the ESRC in 1983. Towards the end of the decade, following an ESRC recommendation, the NIESR appointed an applied macroeconomist (S.G.B. Henry) to lead its macroeconomics research programme. This was the beginning of a move by the ESRC to remove funding explicitly for forecasting and to concentrate the funds on research. There was a perceptible need for improvement in theoretical as well as empirical research. Increased attention to theory was initiated by Friedman, and separately Phelps, both of whom challenged the lack of a theoretical basis for the Phillips Curve, and this example proved very influential in the profession in stimulating the growth of research on economic theory (Friedman, 1968; Phelps, 1970), a stimulus which Lucas was later to generalise. As regards econometrics, like many other bodies the NIESR at this stage relied mainly on single-equation estimation, heavily influenced by the 'general-to-specific' econometrics recommended by Hendry (1995). Expectations formation was generally of the backward-looking, adaptive-expectations variety. In addition, large sections of the models were treated as exogenous, e.g. the exchange rate, tax rates and government expenditure, which inevitably meant that the models did not have a well-defined equilibrium.
In spite of these limitations, the record shows that the NIESR's use of a positive empirical methodology to evaluate economic policy initiatives critically started around this time. One issue it addressed was the endemic problem of 'structural instability' in empirical macroeconomic models. Here, research was directed at the question of why different researchers had obtained widely different empirical results for similar equations over similar samples. The results appeared in its 'Systematic Econometric Comparisons' project, which had financial support from the Treasury and which produced research papers evaluating rival UK wage models (Henry, 1984) and UK imports equations (Brooks, 1981). Later, similar tests in this vein of research were undertaken at the Warwick Macroeconomic Bureau, which lasted from 1983 to 1999.
3. The 1980s
3.1. The background
The 1980s also proved to be turbulent, though in different ways from the previous decade. One benign change was the rapid and large improvement in computer hardware and software, affecting all modelling teams in much the same way. (Important innovations to computing software for macro modelling made at the NIESR are described later.) The changes were rapid. In the late 1970s the NIESR had a computer room housing "several ladies" with hand calculating machines who would do the formal data analysis. At the start of the 1980s the Institute still used a large box of punched cards at a computer bureau to generate its forecasts and simulations but, by 1984, it had an operational PC network, as did most other institutes.
More demanding was that the decade was one of very considerable challenges for the profession at large and the modelling community in particular. In modelling, major extensions to theory, already underway in the mid 1970s, became much more urgent. At the same time, econometric practice changed out of all recognition, as we describe below. Lastly, a new government had just been elected in 1979 with a radical, market-oriented economic agenda. We expand on these three developments next.
3.2. The rise of theory
Theoretical arguments made in two major contributions now became pressing: the Lucas paper, 'Econometric policy evaluation: A critique', in 1976, together with the extensions to this made by Kydland and Prescott (1977), and the later contribution by Sims in 1980 in his 'Macroeconomics and reality'. The reason for this pressure was that the Critique, in particular, was quickly accepted by academic economists, even though many leading econometricians argued it had little empirical support (see Ericsson and Irons, 1995, and Hendry and Mizon, 2010). This near-pervasive acceptance of the Critique was the result of several things: the exaggerated claims made for econometric models, their poor forecasting record especially at turning points, and the continuing diversity of empirical findings in the academic literature. But it was also the case that the argument made by Lucas and others, favouring much greater reliance on microeconomic theoretical underpinnings for macro policy models, resonated quickly with large swathes of the economics profession, many of whom found econometrics difficult, applied econometrics overly time-consuming and its results difficult to publish in leading journals because of a perceived lack of generality. The call to avoid all these difficulties and concentrate instead on theory, possibly relying on calibration against 'stylised facts' or moment matching for model evaluation, was a very appealing alternative. Sims's argument, though also very influential, did not appear to have the extensive effects that Lucas's did, possibly because it was directed at improvement in econometric practice, a narrower and, relatively, shrinking field, as just noted. Sims's criticism of then-current econometric practice was that econometric models at the time applied large numbers of identifying restrictions which had very little justification and which were highly unlikely to be valid. The suggestion by Sims to work with essentially 'atheoretical' empirical VARs was not an appealing way forward at the time, as the objective of the NIESR's model was to give a structural account of the economy and this was not possible within that framework. Subsequently, with the development of cointegration in a multi-equation setting, the VAR approach converged with the Hendry dynamic modelling approach of cointegrated VARs, which could be given at least a long-run structural interpretation. Because these two critiques played such an important part in the subsequent evolution of macro policy modelling, they need to be spelt out more fully.
First, the Critique. There were two parts to this. The first was a deconstruction of the assumptions underlying discretionary monetary and fiscal policy as applied in the 1970s (by Keynesians), arguing that the then current practice of treating a macro policy change as an 'exogenous' change was incorrect. It could not be treated as exogenous because the effects of the policy change would soon become clear to the public at large, who would incorporate it into their own economic plans. In this way, the structure of the 'endogenous' part of the model would change. Policy simulations based on the assumption that the endogenous part of the model would remain constant when policy was changed would thus be incorrect. The second part of the Critique was its recommendation that, as microeconomic relations were (allegedly) more stable than macro ones because they were based on well-understood constrained optimising behaviour by agents, a macro relation derived by aggregating such micro theoretical equations could be expected to be more stable. Such 'micro-founded' theoretical forms for production and consumption functions, for example, otherwise referred to as 'fundamentals' (equations that were unchanging as policy changed), were rapidly taken up in a wide spectrum of DSGE models, ranging from RBC models at one end to NKPM models at the other. This second part attracted less criticism on empirical grounds than did the first; it was only later that both methodological and econometric criticisms of it surfaced.
The argument on methodology is straightforward: in an observational subject such as economics (as in astrophysics), theoretical findings are evaluated by empirical testing. However appealing an initial hypothesis may appear to be, if it fails to conform with the empirical evidence when tested, it needs to be rejected and an alternative sought. This was the method Johannes Kepler used in the 17th century when attempting to fit circles to the planetary data accumulated by Tycho Brahe, possibly because, as Kepler was a religious man, circles were considered to be 'perfect' geometries and, thus, more Godlike. But extended experimentation led him to conclude that planetary motion was not circular but elliptical, and his famous three 'Laws' followed from this finding. Lucas's treatment of theory actually reverses this methodology and, in the process, rejects the need for the empirical verification of any postulated hypothesis. His treatment was the basis for the move at this time not just towards more theory-based models but towards the denigration of econometric results as such.
The econometric argument was (and is) more elaborate and is closely bound up with developments in cointegration. We therefore postpone its discussion to a later section of the paper (section 5). Before that, some illustrations of the NIESR's use of constructively critical empirical tests of policy at this time are briefly described.
3.3. The Medium Term Financial Strategy (MTFS)
The MTFS, introduced by Geoffrey Howe in 1980 and tightened in his 1981 budget, was a source of unusual unity within the economics profession when a large section of it (364 economists) signed a letter critical of its fiscal austerity at a time when unemployment was high. The NIESR consistently opposed and criticised the MTFS from its very inception. The other facet of the MTFS was its two-part model claiming exploitable links, in turn, between fiscal contraction and the money supply (M4) and between M4 growth and inflation, each of which was demonstrably structurally unstable (see Cuthbertson et al., 1980). When implemented, it failed to achieve its objectives of using fiscal contraction to reduce the money supply and, by these means, reduce inflation. These key assumptions of the MTFS proved unreliable, much as predicted in Cuthbertson (op. cit.); the supposed monetary growth–inflation link, in particular, was embarrassingly in error, as money growth increased at the same time that inflation fell, making it well-nigh impossible for the Treasury to claim the MTFS was working as planned. Unsurprisingly, it was dropped soon afterwards. The economy grew and inflation fell in this decade nonetheless. In retrospect, it was clear that the government was 'bailed out' of the negative economic effects of the MTFS on growth by North Sea oil coming fully on stream in the early 1980s, with its boost to growth and fiscal revenues, as well as by the effects of the early stages of financial liberalisation in stimulating consumer borrowing, consumption and growth, with the 'overshooting' exchange rate pressing down on inflation. In spite of this, the MTFS has been credited with rebutting the Keynesian approach to macro policy, replacing it with a strategy that was both monetarist and fiscalist. This was a classic example of the selective use of evidence. Those who argue that the MTFS led to increased growth, better fiscal ratios and lower inflation ignore the (largely unanticipated) short-term effects of financial liberalisation, North Sea oil and the 'overshooting' exchange rate, which were in fact largely responsible. This selective treatment of evidence is widespread. Indeed, it is clearly in evidence much earlier in the application by Lucas and Sargent (1978), which is described next.
3.4. Did the Lucas Critique explain the 1970s ‘stagflation’?
The Critique used a partial treatment of the problem of evaluating the effects of policy change in its analysis, leading to selective (i.e., inadequate and so misleading) inferences about the causes of economic cycles. The paper by Lucas and Sargent was published soon after the Critique. Lucas and Sargent (1978) claimed that the sharp change in the economic performance of the US in the first half of the 1970s, when it had its first serious recession since the 1930s coupled with inflation of around 10 per cent per annum, hence stagflation, was due to the failings of Keynesian beliefs about the effectiveness of macroeconomic policy intervention. In support, Lucas and Sargent pointed to the fiscal deficit at the time, which was "massive", so, they assumed, fiscal policy was "loose". As monetary policy was also expansionary, they concluded that the recession was due to the failure of the "modern Keynesian doctrine" which had predicted "rapid real growth and low unemployment" as a result of this stimulus. Apart from the obvious issue that the increase in the fiscal deficit was, at least partially if not largely, due to the cyclical downturn, and so was not a measure of a discretionary fiscal policy change, this account ignores the possibility that the 1970s recession and the increases in the fiscal deficit and inflation were most probably caused by the first and second oil price hikes following the oil embargo imposed by OPEC. In their later influential study, Bruno and Sachs (1986) argued that fuel inputs were very significant in production and that changes in the relative price of oil from 1973 played a major part in the worldwide recession. This poses two distinct challenges to the Lucas and Sargent story: first, that it was not policy changes that caused the recession, but changes in the relative price of a major input into production and, second, that it was a worldwide effect, so US stagflation was not an isolated problem.
The empirical failings of the approach used by Lucas and Sargent arise from its arbitrary choice of driving variables to explain the variables of interest. From an econometric viewpoint, there are mis-specification, identification, data coherence and causality problems in this treatment sufficient to render its results meaningless. Put another way, had the authors been much more rigorous and actually tested their claims about the role of discretionary policy in a model of US growth and inflation, with proper measures of the fiscal and monetary policy impulses over the period together with additional domestic and external variables potentially having effects on growth and inflation, then it is very likely their conclusions would have been overturned (see Henry, 2018).
3.5. Modelling expectations
3.5.1. Implementing explicit REH expectations formation in large models
Our reasons for doing so were twofold. First, although we were generally sceptical of the REH on the grounds of its extreme informational requirements, it was our view that, to criticise the advocates of the New Classical Macroeconomics (NCM) effectively, it was necessary to have considerable expertise in the techniques of applying the REH, so that our arguments were made from a position of competence. Second, the recession which followed the introduction of the MTFS had a very unusual form: none of the usual components of aggregate demand (consumption, investment, trade, etc.) seemed to account for the recession. Instead the cause was clearly seen to be the behaviour of stockbuilding, which fell sharply. The question then was why, in the absence of any fall in demand or sales, stock levels suddenly collapsed. One possibility was forward-looking expectations. According to this, when the MTFS was announced, firms expected a recession in the future and hence began to run down their stock levels in anticipation, so actually exacerbating the recession itself (see Hall, Henry and Wren-Lewis, 1987).
A number of institutions implemented the REH at about this time or earlier. The NIESR's first application was in the early 1980s, with a single REH equation applied to UK wages by Henry and Wren-Lewis (1984). The next step was to extend such applications fully to a large model such as the NIESR's. As rational expectations are the same as the predictions of the relevant economic model, in a large model context this means that any expectations term should be replaced with the model's own forecast of that variable. In such a model it is not possible to solve for a variable, Y, in period t until the t+1 value is known. The first large model to implement the REH in this way was Fair (1979), which used the Fair-Taylor algorithm. Subsequently Hall (1985a, b) proposed the stacked algorithm which is now generally used for the solution of non-linear models with RE. Instead of treating the n equations at one point in time, this treats the model as n×T equations for periods 1 to T and simply solves the whole set simultaneously. By the late 1980s and early 1990s, RE had been incorporated into many of the large policy models, including those of the LBS, the NIESR, HM Treasury, McKibbin-Sachs, the Fed, the IMF and NiGEM. The NIESR models at this time could, however, be distinguished from most of the others by two features: first, expectations were implemented in a general way affecting most sectors of the economy, including consumption and investment (see Hall and Henry, 1991, for an overview); second, they allowed for longer forward leads in the solutions than was common practice elsewhere. Apart from their computing technicalities, these models need other refinements in order to function. One was to ensure the model had a unique solution so that an RE solution could be defined. This meant that the treatment of such variables as the exchange rate, tax rates and monetary policy as exogenous was no longer possible. Hence monetary policy was set by using a simple rule, such as the Taylor rule, and tax rates were set to prevent the build-up of explosive fiscal deficits. Finally, for rational solutions with a non-linear model, setting terminal conditions becomes an important problem. There are various options: use a known fixed value; a constant level (i.e., Y at T+i is set to Y at T); a constant growth rate for Y; or its equilibrium value. The sensitivity of the solution to these conditions may be tested by extending the terminal date until there is no effect on the initial part of the solution.
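To make the mechanics concrete, the sketch below solves a toy linear forward-looking equation, y_t = a*E_t[y_{t+1}] + b*z_t, by stacking all periods into one simultaneous system under model-consistent expectations and a 'constant level' terminal condition. It is a minimal illustration in Python, not the NIESR code: the equation, the parameter values and the helper name are assumptions made purely for the example.

```python
# A minimal sketch (not the NIESR code) of the 'stacked' solution of a
# forward-looking model, assuming a toy linear equation
#   y_t = a * E_t[y_{t+1}] + b * z_t
# with model-consistent expectations E_t[y_{t+1}] = y_{t+1} and the
# 'constant level' terminal condition y_{T+1} = y_T.
import numpy as np

def solve_stacked(a, b, z, terminal="constant_level"):
    """Stack the T period-by-period equations and solve them as one
    simultaneous linear system, in the spirit of the stacked algorithm."""
    T = len(z)
    A = np.eye(T)
    for t in range(T - 1):
        A[t, t + 1] = -a          # the forward expectation term
    if terminal == "constant_level":
        A[T - 1, T - 1] -= a      # y_{T+1} = y_T folds into the last row
    rhs = b * np.asarray(z, dtype=float)
    return np.linalg.solve(A, rhs)

# A temporary, pre-announced rise in the exogenous driver z: with a > 0
# the solution moves immediately because future values feed back into today.
z_path = np.r_[np.ones(4), 2.0 * np.ones(4), np.ones(12)]
path = solve_stacked(a=0.6, b=1.0, z=z_path)
print(np.round(path, 3))
```

As with the large models described above, the sensitivity of the early part of the path to the terminal condition can be checked by padding the z path and re-solving until the initial periods stop changing.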
3.5.2. Extending the applications of the REH
Developments in implementing RE solutions enabled much more complex applications to be tackled. These included optimal control exercises, stochastic simulations and game-theoretic solutions, which became possible with REH versions of the macro model. For example, in Hall (1987) and Hall and Henry (1988) a set of time-inconsistent solutions for optimal fiscal policy is contrasted with a time-consistent solution. This exercise involved both the optimal control of a rational expectations model and the development of an algorithm to solve such non-linear models for the time-consistent solution.
3.5.3. Boundedly rational learning
While considerable strides had been made in implementing RE, experience with these models very quickly made it clear that this very extreme assumption is far from reality. The most obvious illustration of this is the well-known rational expectations 'jumps' which occur at the start of any RE solution, well documented in the classic Dornbusch model of the overshooting exchange rate. These jumps simply do not happen in the real world, even in very efficient and fast-moving areas such as foreign exchange markets. Obviously an alternative expectations formation process appeared necessary to avoid this extreme behaviour, and one was clearly offered by the literature on boundedly rational learning. (Early papers by Bray and Kreps, 1984, and Bray and Savin, 1986, established the theoretical framework for this version of expectations formation.) The idea is fairly simple: agents are assumed to use a simple rule to form expectations (it may be the full reduced-form equation of a model but with unknown parameters, or it may be some subset of this complete rule). But the agent does not know the true parameters of this rule and hence she will make mistakes in forming expectations. Over time, as she observes each mistake, she will revise the parameters of the rule and gradually come to 'learn' the true parameters and begin to act rationally. This early work was refined by Evans (1985, 1986) and subsequently by Marcet and Sargent (1989a, b). It was first applied to a large econometric model by Hall, Garratt and Currie (1993) and Hall and Garratt (1997). The practical implementation in the large model context used a Kalman filter time-varying parameter model. This approach to expectations formation gives much more plausible solutions, both in forecasting and in simulation, as the sudden RE jumps are eliminated. It also allows many scenarios to be investigated where we do not want to assume that agents are perfectly rational with a full knowledge of the economy.
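The updating idea can be illustrated in a few lines of code. The sketch below is a deliberately simplified, assumption-laden example rather than the Hall, Garratt and Currie implementation: an agent learns a single coefficient by recursive least squares, and a constant-gain or Kalman filter variant of the same recursion would give the time-varying parameter behaviour mentioned above.

```python
# A minimal sketch of boundedly rational (least-squares) learning: the agent
# regresses the variable she must forecast on her information set, updates the
# coefficient recursively as each new observation (and mistake) arrives, and
# so converges towards the rational-expectations parameter.
import numpy as np

rng = np.random.default_rng(0)
true_beta = 0.8                       # the REE parameter agents must learn
T = 200
x = rng.normal(size=T)
y = true_beta * x + 0.1 * rng.normal(size=T)

beta = 0.0                            # initial (wrong) belief
P = 10.0                              # belief 'variance' (RLS scaling factor)
for t in range(T):
    forecast = beta * x[t]            # expectation formed with current belief
    error = y[t] - forecast           # the mistake observed ex post
    # recursive least squares update of the belief
    K = P * x[t] / (1.0 + P * x[t] ** 2)
    beta = beta + K * error
    P = P - K * x[t] * P

print(f"belief after {T} periods: {beta:.3f} (true value {true_beta})")
```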
3.5.4. Expectations based on survey data
There are regular surveys in many countries which ask firms what their expectations are for sales, output and prices, either their own or aggregates for the country. The responses are 'qualitative': the variable in question is expected to go up, go down or remain the same (U, D or S respectively). To convert these responses into a 'quantitative' series it is normally assumed that the individual responses are drawings from some sort of probability distribution. This turns out to be easier, though not straightforward, for aggregate variables, where the assumption that the distribution is Normal can sometimes be invoked. The most familiar applications using aggregate survey data on inflation in this way were to estimate Phillips curves (see Carlson and Parkin, 1975).
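As an illustration of the conversion step, the short sketch below backs out a mean and dispersion of expectations from the 'up' and 'down' shares under a Normal distribution with an indifference band, in the spirit of the Carlson and Parkin approach. It is a minimal, assumed setup: the survey shares and the scaling parameter delta are invented for the example, and in practice delta is calibrated, for instance so that mean expectations match actual inflation over the sample.

```python
# A minimal illustration (assumed numbers, not NIESR data) of the
# Carlson-Parkin idea: survey shares expecting a rise (U) or a fall (D) are
# treated as tail probabilities of a Normal distribution with an indifference
# band +/- delta, from which a quantitative mean expectation is backed out.
from scipy.stats import norm

def carlson_parkin(up_share, down_share, delta=1.0):
    """Return the implied mean and standard deviation of expectations."""
    a = norm.ppf(down_share)          # a = (-delta - mu) / sigma
    b = norm.ppf(1.0 - up_share)      # b = ( delta - mu) / sigma
    sigma = 2.0 * delta / (b - a)
    mu = -delta * (a + b) / (b - a)
    return mu, sigma

# e.g. 55% expect prices to rise, 10% expect them to fall
mu, sigma = carlson_parkin(up_share=0.55, down_share=0.10, delta=1.0)
print(f"implied mean expectation {mu:.2f}, dispersion {sigma:.2f}")
```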
In a more recent contribution, Simon Wren-Lewis confronted the more difficult issue of converting individual (e.g. firm-level) qualitative responses into quantitative data. For this, the assumption that the probability distribution is standard is unlikely to hold; Wren-Lewis uses three distributions (the normal, the sech-squared and the uniform, or rectangular) and compares the implications of each for price and output expectations in the UK manufacturing sector. Interestingly, there is a surprising degree of uniformity across the different assumptions, which show that output expectations were overestimated for 1976–8 and that price expectations underestimated price volatility.
4. Forecast and simulation uncertainty
The Bank of England's treatment of forecast uncertainty (the well-known 'rivers of blood' figures), published from 1996, is often hailed as an important development in understanding forecast uncertainty. But the NIESR was conducting very similar exercises much earlier, and arguably with a firmer analytical basis, throughout the 1980s. These took two forms. The one closest to the Bank's analysis was a series of sections, reported in the forecast write-up approximately every two years, which analysed the historical accuracy of the NIESR's own forecasts. But there were also important developments in the formal analysis of stochastic simulations with large models (see Hall, 1986, and references therein), which developed algorithms for calculating confidence bands for forecasts from a non-linear model, based both on the uncertainty stemming from the stochastic errors in the economic model and on that from uncertain parameters. This work also developed algorithms for calculating the uncertainty of a simulation or scenario analysis. In this case the standard errors are much smaller, as most of the uncertainty in a forecast comes from the stochastic error terms, which contain the unforecastable shocks that hit the economy. Only a small part of the total forecast error comes from the stochastic parameters and the non-linearity in the model. This matters in scenario or policy analysis because, in this sort of exercise, the effects of the uncertain error terms wash out. Intuitively, this happens because the same shocks hit both the baseline solution and the alternative scenario. The uncertainty of the simulation is due solely to the uncertainty in the parameters and their interaction with the non-linearity in the model, and this is typically much smaller than the total forecast uncertainty.
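The intuition that scenario differences are far less uncertain than the forecast itself can be demonstrated with a toy Monte Carlo experiment. The sketch below is not the Hall (1986) algorithm: it uses an invented AR(1) 'model' with both parameter and shock uncertainty, and compares the dispersion of the forecast with the dispersion of the difference between a baseline and a policy scenario hit by the same shocks.

```python
# A toy Monte Carlo sketch illustrating why scenario/policy differences are
# far less uncertain than the forecast itself: the same stochastic shocks hit
# both the baseline and the alternative, so they largely cancel in the
# difference, leaving only parameter (and non-linearity) uncertainty.
import numpy as np

rng = np.random.default_rng(1)
T, N = 20, 5000
rho_hat, rho_se, sigma_e = 0.8, 0.05, 1.0   # assumed AR(1) estimates

forecasts, differences = [], []
for _ in range(N):
    rho = rng.normal(rho_hat, rho_se)        # parameter uncertainty
    shocks = rng.normal(0.0, sigma_e, T)     # stochastic model errors
    y_base = y_alt = 0.0
    for t in range(T):
        y_base = rho * y_base + shocks[t]                       # baseline
        y_alt = rho * y_alt + shocks[t] + (1.0 if t == 0 else 0.0)  # policy impulse
    forecasts.append(y_base)                 # level at the forecast horizon
    differences.append(y_alt - y_base)       # scenario minus baseline

print("forecast std at horizon T:    %.2f" % np.std(forecasts))
print("scenario-difference std at T: %.2f" % np.std(differences))
```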
Though this may appear to be a mere technical issue, it has considerable practical policy relevance. The current Brexit debate is a case in point. Both the BoE and the Treasury have analysed the effects of Brexit and find that it is seriously negative. However, this has been widely dismissed on the grounds that the forecasts made by both institutions are not very accurate ('Project Fear', as it has been inaccurately labelled). But the BoE and Treasury exercises each use scenario analyses. Hence, though the baseline forecast may be uncertain, the comparative effect of Brexit is much more certain.
5. The paramount importance of nonstationarity in economic models
Aggregate time series data are invariably nonstationary, with unit roots and intermittent, unanticipated structural breaks. This not only poses major econometric issues in identifying these characteristics in samples of data; evidence of structural breaks is often the sign of regime changes in the economy, with the potential for major challenges to policy formation. Research at the NIESR played an important part both in applying the econometric methods needed when confronted with nonstationarity and in addressing the policy problems to which it led. The implications this has for economic policy models are still not fully appreciated. Indeed, its significance is almost completely ignored in prevalent DSGE models, including NKPM ones. Merely detrending non-stationary data, as is commonly done with DSGE models, is not sufficient. This practice does not identify what forms of nonstationarity are present, blurring the sources of change in the economy, and it prejudges the nature of the long-run behaviour of the economy.
5.1. The treatment of nonstationarity started early at the NIESR
Hall and Brookes (1986, but originally written in 1983) was an important development in the analysis of non-stationary time series data. The objective of the paper was the analysis of the long-run behaviour of UK prices. Its interest, however, goes far beyond this application, as it proposed looking at the long-run behaviour of the variables using a static regression prior to running the complete dynamic model, and was a precursor to the Engle and Granger two-step estimation procedure. A subsequent paper, Hall (1989), was also the first to publish an example of the reduced-rank, maximum-likelihood procedure developed by Johansen, using the same wage example as in Hall (1986a). A great deal of the subsequent econometric work at the NIESR provided a wealth of applications of the new approach of cointegration (see Hall and Henry, 1987, and Davidson and Hall, 1990, amongst others).
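A minimal sketch of the two-step logic described above is given below: a static regression to pick up the long-run relation, a unit-root test on its residuals, and then a dynamic error-correction equation using the lagged residual. The data are simulated and the variable names invented; in serious work, Engle-Granger critical values rather than standard ADF ones should be used for the residual-based test.

```python
# A minimal sketch of the static long-run regression followed by a dynamic
# error-correction step, in the spirit of the two-step procedure described in
# the text (simulated data; variable names are illustrative only).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T = 300
x = np.cumsum(rng.normal(size=T))                    # an I(1) driver
y = 0.5 + 1.0 * x + rng.normal(scale=0.5, size=T)    # cointegrated with x

# Step 1: the static regression picks up the long-run (cointegrating) relation
static = sm.OLS(y, sm.add_constant(x)).fit()
resid = static.resid
print("ADF statistic on the static residuals:", round(adfuller(resid)[0], 2))

# Step 2: a dynamic (error-correction) equation using the lagged residual
dy, dx = np.diff(y), np.diff(x)
ecm_X = sm.add_constant(np.column_stack([dx, resid[:-1]]))
ecm = sm.OLS(dy, ecm_X).fit()
print("error-correction coefficient:", round(ecm.params[2], 2))
```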
5.2. Shortcomings of the DSGE
The many developments detailed above illustrate an approach to modelling based on the adoption and development of new ideas and techniques such as RE and cointegration, but most central to the approach we had been developing was the notion of data coherency and of testing the theories we were implementing. In our view a model which cannot explain the data has no real value; a model must be refutable to allow us to progress to better models. Unfortunately, the past 25 years have seen a serious departure by much of the profession from these empirically validated models towards something which is both simpler and without any real empirical foundation. Methods such as moment matching, 'stylised facts' or calibration will not capture the important data changes described above, still less assess their economic importance. Even when DSGE models are estimated using Bayesian techniques (as in Smets and Wouters, 2003), the value of this is questionable. The equations are so poorly specified and so far from being congruent with the data that any orthodox estimation would give nonsense parameters; the Bayesian priors are needed to keep the parameters close to something which is theoretically acceptable.
In the meantime, large macro models had been steadily evolving, using a flexible mixture of theory and evidence, with RE or sometimes learning options. They also had supply sides and full model closure. Early examples include the McKibbin-Sachs model, the ECB's new multi-country model, the EU's QUEST model and the NIESR's NiGEM.
In contrast, after its independence in 1997, the Bank of England moved to a form of DSGE model (BEQM) in 2003/4 to inform its forecasts and its policy simulations. This had major issues both in its estimation and in its coverage, so we now outline how the adoption of this largely non-econometric form of model played an important part in the BoE's policy failings.
5.3. Where the BoE went wrong
Though it is now evident that there were policy co-ordination failures in the Tripartite arrangements in the crisis, other things went badly wrong too, most clearly in the BoE's mishandling of the run on Northern Rock in 2007, when the Governor refused to undertake extra liquidity support for the financial system as requested by the Chancellor, seemingly not appreciating the systemic risks inaction posed. Such problems were not confined to this episode: the rapid increase in the fragility of the financial system during the first half of the decade, for example, was sufficiently in evidence for the BoE's own financial stability section to voice major concerns about the dangers of systemic risk in 2006 (Financial Stability Report, 2006).
The authorities' difficulties in responding to the major macroeconomic problems of the 2000s can be traced to a largely uncritical acceptance of key assumptions of the Dynamic Stochastic General Equilibrium (DSGE) paradigm, amongst which are its assumptions that there are no banks, that expectations of all agents are formed rationally with no informational asymmetries, that markets clear and that balanced growth equilibria prevail. In addition, it has no explicit treatment of the fiscal side of the economy, so cannot address the thorny and very central question of monetary and fiscal co-ordination. This acceptance led to the BoE's misplaced belief that its policy actions were largely responsible for the success of the 'Great Moderation' of the 1990s and the early 2000s. It also underpinned the views that the rapid rise in house price inflation was a 'bubble' best treated by 'benign neglect'; that the increases in consumption over the period were not a matter of particular concern and were not significantly affected by the house price changes just noted; and, lastly, that the huge expansion of bank balance sheets and the increases in the volume of credit then underway were not a concern for monetary policy but could only be dealt with by improvements in regulation. The BoE's belief in the validity of these propositions meant that monetary policy decisions in the UK in the period up to and including the crisis made no allowance for two crucial things: one being the possible effects of globalisation on non-increasing inflation equilibria in the UK, and the other the effects of rapid and extensive financial liberalisation on the behaviour and, consequently, the financial fragility of the economy.
6. Conclusions
This paper has described how the NIESR has pursued rigorous econometric testing applied to a set of macroeconomic problems over the past two to three decades. The essence of this research programme was that its large macroeconomic model was constantly evolving in the light of new policy problems and innovations in econometric practice. The DSGE movement denied that this could be worthwhile; theory based on a priori reasoning was held to be the only valid method, and this view has been widely accepted in the profession. The acid test of the DSGE approach came with the financial crisis of 2008-9, which it completely failed to anticipate or even to explain ex post.
Nearer to home, in a critique of existing macro models in the UK, BoE economist Whitley (1997) listed a set of 'failings' of large-scale macro models. These included: a 'one size fits all' approach to modelling; non-vertical long-run Phillips curves; proneness to forecast failure, especially in the aftermath of the oil price shocks of the 1970s, suggesting an over-emphasis on the income (demand) side with little allowance for the supply side; and no allowance for uncertainty. It suffices here to note that these judgements were each either incorrect or misleading. The critique lumps together examples from the early 1970s with those from the 1980s and even the 1990s, as if this conglomerate represented a single 'representative model'. Moreover, models at that time ranged across the complete spectrum from Keynesian to RBC, so it was nonsense to treat them as sharing a common pedigree, still less as being very uniform. For the record, serious work on the supply side appeared early on in the City University Business School, LBS and NIESR models. Similarly, as described earlier, there were far-reaching innovations in expectations modelling in the Liverpool and NIESR models, as well as pioneering work at the LBS on an integrated treatment of general equilibrium asset behaviour in a full-information model with rational expectations. Model-consistent expectations formation was routine in dynamic simulations by the latter part of the 1980s, something that the BoE modellers of BEQM found difficult to do. Locating models that fit the data and are theoretically coherent remains the purpose of applied macroeconomic modelling at the Institute.