1. Introduction
Occasionally-binding constraints, such as borrowing limits and the lower bound on nominal interest rates, introduce a stark non-linearity in economic models. As a result, standard solution methods for linear rational expectations models (Blanchard and Kahn, 1980; Binder and Pesaran, 1997; Uhlig, 1999; Sims, 2002), which assume a time-invariant structure, must be adapted to cope with such constraints. An important contribution to the literature was made by Guerrieri and Iacoviello (2015). They show how otherwise-linear rational expectations models with occasionally-binding constraints and many state variables can be solved using a guess and verify method, and they also provide a toolkit (OccBin) that implements the solution algorithm in the popular software package Dynare. When a solution exists, their algorithm finds one solution under perfect foresight and assuming zero anticipated future shocks. However, it is known that models with occasionally-binding constraints may have multiple perfect foresight equilibria (see Holden, 2023), and neglecting these additional solutions may have non-trivial quantitative or policy implications.
In this paper, we therefore extend the Guerrieri and Iacoviello (2015) solution method so that multiple perfect foresight equilibria can be detected and simulated. Using this approach, we show how researchers can compute expected outcomes (such as welfare measures) when there are multiple perfect foresight solutions, and we also show how to run stochastic simulations with switching between equilibria on the simulated path. Our algorithm also extends the solution of Guerrieri and Iacoviello (2015) to allow non-zero “news shocks” while preserving computational tractability.Footnote 1 We demonstrate our approach using a running example (Fisherian model) and a policy application that studies conventional versus unconventional interest rate rules in a New Keynesian model with a zero lower bound and multiple equilibria.
The modern literature on occasionally-binding constraints began with Eggertsson and Woodford (2003) and Jung et al. (2005), who study the benchmark New Keynesian model with a zero lower bound on nominal interest rates, the former in a version of the model with a two-state Markov process and the latter under perfect foresight. The papers most relevant to the current paper are those which study perfect foresight solutions to models with occasionally-binding constraints, including the computational papers of Guerrieri and Iacoviello (2015), Holden (2016), Boehl (2022), and the theory paper by Holden (2023). Most of these papers focus on dynamic models which are linear aside from occasionally-binding constraints (as here), and only Holden (2016, 2023) considers multiple equilibria.
The present paper makes two methodological contributions. First, relative to Guerrieri and Iacoviello (2015), we extend their solution method (based on undetermined coefficients) to detect and simulate multiple perfect foresight equilibria, and also to allow non-zero anticipated news, such as “forward guidance” shocks. Our algorithm allows a much wider range of economic scenarios, as additional solutions and simulation paths are not neglected, while the addition of “news shocks” means that announcements or news events can be studied. Second, as argued by Farmer et al. (2015, p. 17), “a model with an indeterminate set of equilibria is an incomplete model.” The incompleteness problem can be resolved by drawing a sunspot that picks an equilibrium (i.e. a particular perfect foresight path). This selects a unique realized equilibrium path that can be studied, but it also gives one particular solution path precedence over others. We attempt to speak to both views by providing an approach for computing expected paths that summarize “average outcomes” given multiple solutions (an approach not used in Holden (2023)), and we also show how to run stochastic simulations with switching between equilibria on the simulated path in response to unanticipated shocks (here we use an extended path method).
Our paper also contributes to the literature by providing policy applications. First, in the methodology section we include a simple “running example”, namely a Fisherian model with multiple equilibria: both a high-inflation and low-inflation solution exist for the same initial conditions. We use this simple example to illustrate our expected outcomes approach based on “prior probabilities” and the extension to stochastic simulation with switching between multiple equilibria. We also show how these approaches can be used for policy analysis.
Our main policy application studies a New Keynesian model with a zero lower bound on nominal interest rates and multiple equilibria for some parameter values (Brendon et al., 2013). Here we show that our algorithm replicates their finding of two perfect foresight equilibria: there is a “good” solution for which the lower bound is not hit, and a “bad” solution based on self-fulfilling pessimistic expectations, for which interest rates spend some time at the bound and inflation and the output gap are strongly negative and persistent. Multiplicity occurs under a conventional “growth-based” interest rate rule that includes an inflation target and a response to the first-difference in the output gap, the “speed limit.” Such speed-limit policies are considered attractive by some policymakers and have some known stabilization advantages, including robustness to output gap measurement error;Footnote 2 however, by following such rules, policymakers may inadvertently bring about indeterminacy. A natural question is then: would unconventional monetary policy rules restore determinacy and stabilize the economy while retaining the potential advantages of a speed limit?
We answer this question by studying two unconventional policies, price-level targeting and forward guidance, which provide stimulus at the lower bound and which, based on past work, may rule out (or mitigate) multiplicity of equilibria. For forward guidance, we consider a large number of announcements promising low interest rates and find that multiplicity generally prevails and that the “good” and “bad” solutions have inflation and output gaps exacerbated relative to a more conventional interest rate rule. By comparison, an interest rate rule that replaces the inflation target with a price-level target shrinks but does not eliminate the indeterminacy region, and when both solutions exist the “bad” solution is often comparatively tame in terms of inflation and output gap deviations.
These results shed new light on a conclusion in Holden (2023). In particular, while he shows that Taylor-type rules with a price-level target and an output gap level target ensure uniqueness in a range of New Keynesian models, our “speed limit” application shows that an interest rate rule with a price-level response is not (in general) sufficient to ensure determinacy in New Keynesian models with a zero lower bound on nominal interest rates. Furthermore, we find that a price-level targeting rule outperforms the other interest rate rules we study in terms of social welfare.
The paper proceeds as follows. Section 2 outlines the solution method and describes how we extend the benchmark algorithm to study multiple equilibria. Section 3 provides details of our “prior probabilities” approach to simulating multiple equilibria, including expected outcomes and the construction of stochastic simulations. Section 4 presents our policy application in a New Keynesian model. Finally, Section 5 concludes.
2. Methodology
Consider a multivariate rational expectations model with perfect foresight. The model is linear aside from multiple possible regimes due to occasionally-binding constraints, and time is discrete: $t \in \mathbb{N}_+$. As in Guerrieri and Iacoviello (2015), we focus for exposition purposes on the case of a single occasionally-binding constraint, so there are two regimes.Footnote 3
The reference regime (slack) is described by (1), and the alternative regime (bind) by (2):
$$\overline{B}_1 x_t = \overline{B}_2 E_t[x_{t+1}] + \overline{B}_3 x_{t-1} + \overline{B}_4 e_t + \overline{B}_5 \tag{1}$$
$$\tilde{B}_1 x_t = \tilde{B}_2 E_t[x_{t+1}] + \tilde{B}_3 x_{t-1} + \tilde{B}_4 e_t + \tilde{B}_5 \tag{2}$$
where $x_t$ is an $n \times 1$ vector of endogenous state and jump variables, $E_t$ is the conditional expectations operator, and $e_t$ is an $m \times 1$ vector of exogenous “news shocks” whose values are known. Note that serially correlated exogenous processes can be included in the vector $x_t$.
Matrices $\overline{B}_i, \tilde{B}_i$, $i \in [5]$, contain the model parameters. The $\overline{B}_i, \tilde{B}_i$, $i \in \{1,2,3\}$, are $n \times n$ matrices, $\overline{B}_4, \tilde{B}_4$ are $n \times m$ matrices, and $\overline{B}_5, \tilde{B}_5$ are $n \times 1$ vectors of intercepts. As shown in Binder and Pesaran (1997), the above formulation is quite general as it can accommodate multiple leads and lags of the endogenous variables through an appropriate definition of $x_t$.
The first variable $x_{1,t}$ is subject to a lower bound constraint in all periods:
$$x_{1,t} = \max\{\underline{x}_1,\ x_{1,t}^*\}, \qquad x_{1,t}^* = F\begin{bmatrix} x^{\prime}_t & x^{\prime}_{t+1} & x^{\prime}_{t-1} \end{bmatrix}^{\prime} + G e_t + H \tag{3}$$
where $\underline{x}_1 \in \mathbb{R}$ is the lower bound and $x_{1,t}^*$ is the “shadow value” of the constrained variable. Here, $F$ is a $1 \times 3n$ vector with $f_{11} = 0$; $G$ is a $1 \times m$ vector; and $H \in \mathbb{R}$ is a scalar. The specification in (3) allows the constrained variable to depend on the news shocks and on contemporaneous, past or future values of the endogenous variables; note that an upper bound constraint can easily be accommodated.Footnote 4 The vectors $F, G$ and the scalar $H$ are given by the equation that describes the bounded variable when the constraint is slack; for example, with a lower bound on nominal interest rates, this equation is typically a Taylor(-type) rule.
In typical applications, one of the intercept vectors may be zero, as DSGE models are often log-linearized around a non-stochastic steady state (see Uhlig, 1999). Given mutually exclusive regimes, we introduce an indicator variable $\unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}}$ that is equal to 1 if $x_{1,t}^* \gt \underline{x}_1$ and 0 otherwise (i.e. if $x_{1,t}^* \leq \underline{x}_1$). Our model based on (1)–(3) is then
$$B_{1,t}\, x_t = B_{2,t}\, E_t[x_{t+1}] + B_{3,t}\, x_{t-1} + B_{4,t}\, e_t + B_{5,t}, \qquad B_{i,t} = \unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}} \overline{B}_i + \big(1 - \unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}}\big) \tilde{B}_i, \tag{4}$$
together with the constraint $x_{1,t} = \max\{\underline{x}_1, x_{1,t}^*\}$ in (3),
where $x_0 \in \mathbb{R}^n$ is given and all news shocks $e_1, e_2, \ldots \in \mathbb{R}^m$ are known.
The information set at time $t$ includes all current, past and future values of the endogenous and exogenous variables; note that the indicator $\unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}}$ is endogenous. As in Guerrieri and Iacoviello (2015) and Holden (2023), we assume the model returns to the reference regime forever after some finite date $T \geq 1$ (i.e. $\unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}} = 1$ $\forall t \gt T$).Footnote 5
Following Guerrieri and Iacoviello (2015), we find $(\unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}})_{t=1}^{T}$ using a guess-verify method. That is, we guess a sequence of regimes and date $T$, and accept the resulting time path $(x_t)_{t \geq 1}$ as a solution only if the guessed sequence of regimes is verified by the shadow variable $x_{1,t}^*$.
2.1 Preliminaries
Definition 1. A perfect foresight solution to model (4) is a path $x_t = f(x_{t-1}, (e_s)_{s=t}^\infty)$ such that the system in (4) holds for all $t \geq 1$ given a known sequence of news shocks $(e_t)_{t=1}^\infty$.
An alternative way of characterizing a solution is in terms of a set of matrices $\{\Omega_t, \Gamma_t, \Psi_t\}_{t=1}^\infty$ that generalize the constant-coefficient decision rules of a linear rational expectations model:
$$x_t = \Omega_t x_{t-1} + \Gamma_t e_t + \Psi_t, \tag{5}$$
where $\Omega_t$ is an $n \times n$ matrix, $\Gamma_t$ is an $n \times m$ matrix, $\Psi_t$ is an $n \times 1$ vector, and the $t$ subscript indicates that the matrices are in general time-varying.
Following Guerrieri and Iacoviello (2015) and Kulish and Pagan (2017), the matrices $\Omega_t, \Gamma_t, \Psi_t$ are determined recursively using simple formulas. Our perfect foresight assumption implies that the solution $x_t$ will generally depend on both current shocks $e_t$ and anticipated future shocks $e_{t+1}, e_{t+2}, \ldots$, which enter the solution via the “intercept” matrix $\Psi_t$.
There are three key requirements for the application of our algorithm:

(i) Existence of a rational expectations solution at the reference regime (terminal solution).

(ii) A series of regularity conditions: $\det[B_{1,t} - B_{2,t}\Omega_{t+1}] \neq 0$ must hold for $t = 1, \ldots, T$, where $T+1$ is a date from which the terminal solution applies in perpetuity.

(iii) The solution path $x_t$ must satisfy (4) and $x_{1,t} \gt \underline{x}_1$ for all $t \gt T$ (terminal condition).
Requirements (i)–(iii) also apply in Guerrieri and Iacoviello (2015). The terminal solution in (i) can be found using standard methods, such as Blanchard and Kahn (1980), Binder and Pesaran (1997), Sims (2002) or Dynare (Adjemian et al. 2011), which can check if the solution is unique. Requirement (i) is necessary but not sufficient for existence of a solution; the regularity conditions in (ii) must hold, and the perfect foresight path must satisfy the occasionally-binding constraint and the terminal condition in (iii), as in Holden (2023).
We assume the Blanchard-Kahn conditions for uniqueness and stability are satisfied and that the terminal solution is away from the lower bound for all $t \gt T$. Formally, we have:
Assumption 1. We assume $\det[\overline{B}_1 - \overline{B}_2 - \overline{B}_3] \neq 0$, such that there exists a unique steady state $\overline{x} = (\overline{B}_1 - \overline{B}_2 - \overline{B}_3)^{-1}\overline{B}_5$ at the reference regime. This steady state satisfies $\overline{x}_1 \gt \underline{x}_1$.
Assumption 2. For any initial value, there is a unique stable terminal solution at the reference regime, $x_t = \overline{\Omega} x_{t-1} + \overline{\Psi}$, $\forall t \gt T$, where $\overline{\Psi} = (\overline{B}_1 - \overline{B}_2 \overline{\Omega})^{-1}(\overline{B}_2 \overline{\Psi} + \overline{B}_5) = (I_n - \overline{\Omega})\overline{x}$, and $\overline{\Omega} = (\overline{B}_1 - \overline{B}_2 \overline{\Omega})^{-1}\overline{B}_3$ has eigenvalues inside the unit circle, so $x_t \rightarrow \overline{x}$ as $t \rightarrow \infty$.
Assumption 3. Agents know all future shocks $(e_t)_{t=1}^\infty$, and $e_t = 0_{m \times 1}$ for all $t \gt T$.
Assumptions 1–3 are analogous to the assumptions in Holden (2023, Supp. Appendix). Assumption 1 restricts attention to models with a unique steady state $\overline{x}$ at the reference regime that does not violate the lower bound constraint. Assumption 2 ensures that the terminal solution at the reference regime converges to this steady state. Lastly, Assumption 3 states that agents have perfect foresight and that news shocks “die out” after date $T$.
2.2 Solution algorithm
Recall that date $t = 1$ is the first period. Given perfect foresight, expectations coincide with future values: $E_t[x_{t+1}] = x_{t+1}$ for all $t \geq 1$. The system to be solved is therefore:
$$B_{1,t}\, x_t = B_{2,t}\, x_{t+1} + B_{3,t}\, x_{t-1} + B_{4,t}\, e_t + B_{5,t}, \qquad x_{1,t} = \max\{\underline{x}_1, x_{1,t}^*\}, \tag{6}$$
where $B_{i,t} = \unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}} \overline{B}_i + (1-\unicode{x1D7D9}_{\{ x_{1,t}^* \gt \underline{x}_1 \}})\tilde{B}_i$ $\forall i \in [5]$; see (4).
By assumption, the reference regime holds for all $t \gt T$, and the terminal solution $x_t = \overline{\Omega} x_{t-1} + \overline{\Psi}$ is away from the bound. Thus, agents can use backward induction from the terminal solution $x_{T+1} = \overline{\Omega} x_T + \overline{\Psi}$ in period $T$, giving the following solution algorithm, which uses a “guess and verify” approach. Note that guesses on the indicator variable are denoted by $\unicode{x1D7D9}_t \in \{0,1\}$ because some guesses on the sequence of regimes may not be verified.
1. Pick a $T \geq 1$ and a simulation length $T_s \gt T$. Guess a sequence $(\unicode{x1D7D9}_t)_{t=1}^T$ of 0s and 1s, starting with all 1s (slack in all periods) as an initial guess. Note: $\unicode{x1D7D9}_t = 1$ for $t \gt T$.

2. Find the structural matrices (or “regimes”) implied by the guess:
$$B_{i,t} = \unicode{x1D7D9}_t \overline{B}_i + (1-\unicode{x1D7D9}_t)\tilde{B}_i, \quad i \in [5], \quad \text{in periods } t = 1, \ldots, T_s.$$

3. Compute $(x_t)_{t=1}^{T_s}$ and the shadow value of the bounded variable $(x_{1,t}^*)_{t=1}^{T_s}$ via
$$x_t = \begin{cases} \Omega_t x_{t-1} + \Gamma_t e_t + \Psi_t & \text{for } 1 \leq t \leq T \\ \overline{\Omega} x_{t-1} + \overline{\Psi} & \text{for } t \gt T \end{cases}, \quad x_{1,t}^* = F\begin{bmatrix} x^{\prime}_t & x^{\prime}_{t+1} & x^{\prime}_{t-1} \end{bmatrix}^{\prime} + G e_t + H,$$
where the time-varying matrices are computed by backward recursion for $t = 1, \ldots, T$, starting from the terminal matrices $\Omega_{T+1} = \overline{\Omega}$, $\Psi_{T+1} = \overline{\Psi}$, $\Gamma_{T+1} = 0_{n \times m}$:
$$\Omega_t = (B_{1,t} - B_{2,t}\Omega_{t+1})^{-1} B_{3,t}, \qquad \Gamma_t = (B_{1,t} - B_{2,t}\Omega_{t+1})^{-1} B_{4,t},$$
$$\Psi_t = (B_{1,t} - B_{2,t}\Omega_{t+1})^{-1}\big(B_{2,t}(\Psi_{t+1} + \Gamma_{t+1} e_{t+1}) + B_{5,t}\big).$$
(A code sketch of this step follows the algorithm.)

4. If $x_{1,t} = \max\{\underline{x}_1, x_{1,t}^*\}$ for $t = 1, \ldots, T$ and $x_{1,t} \gt \underline{x}_1$ $\forall t \gt T$, accept the guess and store the solution $(x_t)_{t=1}^{T_s}$; else reject. Return to Step 1 and repeat for a new guess.
The above algorithm has two additions relative to Guerrieri and Iacoviello (2015). First, it allows for multiple perfect foresight solutions. If a guessed structure is verified (Step 4), then the resulting time path is accepted as a perfect foresight solution and is stored for later use, along with any additional solutions found by repeating Steps 1–4 with new guesses. Second, the algorithm permits the inclusion of “news shocks” up to a finite horizon, whereas the original algorithm sets future shocks at zero. Conveniently, our solution has the same general form as in Guerrieri and Iacoviello (2015) because the intercept matrix $\Psi_t$ (Step 3) incorporates the news shocks, and is determined by a recursive formula (it depends on its own future value). Hence, this generalization, which can be used to model pre-announced policies or other news events, comes at essentially zero computational cost.Footnote 6
In Section 3 below, we present a method for performing model simulations when there are multiple equilibria. As argued by Farmer et al. (2015, p. 17), “a model with an indeterminate set of equilibria is an incomplete model” and will need to be closed by some means in order to simulate or estimate the model.Footnote 7 In general, a model with multiple equilibria may be closed either by using some “selection criterion” to rule out all equilibria but one on some economic grounds (such as learnability), or by drawing an exogenous “sunspot” (a non-fundamental shock from outside the model) that determines which equilibrium agents’ expectations coordinate on. In the absence of a generic selection criterion, we take the latter sunspot approach. Note that this approach does not contradict the perfect foresight assumption (which implies that risk is absent) because the exogenous sunspot is not part of the solution path itself from date 1 onwards, but is only a means of initially determining (at date 1) which of the perfect foresight solutions will “play out.”
We start out by formalizing our approach in which the solutions are treated as categorical and a sunspot determines which perfect foresight path is realized (see Remark 1). We give the sunspot a simple structure (uniform random variable) such that the probability that agents will coordinate their expectations on a particular equilibrium can be interpreted in terms of “prior beliefs.” Hence, if there were a bank run equilibrium and a no-run equilibrium, then the researcher may assign a low probability to the “run equilibrium” if they view it as somewhat implausible. At the other extreme, a researcher may take an agnostic approach by assigning equal probability to each equilibrium—“flat priors”—and in this case the agents will have equal probability of coordinating on any given equilibrium path.
With this simple structure in hand, we show how researchers or policymakers can compute expected outcomes, such as average paths or expected welfare under the “veil of ignorance” (see Section 3.2).Footnote 8 As a result, this approach allows welfare evaluation to be conducted in the face of multiple solutions, as well as allowing robustness analysis that checks sensitivity to changes in the underlying probabilities. Both exercises may be useful to policymakers or researchers who would like to summarize economic outcomes when there are multiple solutions which cannot be ruled out a priori or assigned indisputable probabilities.
We also provide an extension for stochastic simulation in Section 3.3. We thereby generalize the stochastic simulation approach in Guerrieri and Iacoviello (2015) to the more difficult case of multiple equilibria and non-zero expected future shocks. In particular, we relax the perfect foresight assumption by allowing unanticipated shocks after period 1 and solve the model using an extended path method with risk ignored by agents when they form expectations.Footnote 9 Along such simulation paths, switching between different equilibria can occur and these switches can contribute substantially to macroeconomic fluctuations.
2.3 Implementing the algorithm
Our “guess and verify” algorithm builds on Guerrieri and Iacoviello (2015) and uses the simple idea that continuing the guess-verify procedure after one solution has been found may yield additional solutions to the linear complementarity problem.Footnote 10 As in Guerrieri and Iacoviello (2015), finding a solution requires inversions of the matrix $(B_{1,t} - B_{2,t}\Omega_{t+1})$ for $t = 1, \ldots, T$, as in Step 3 of the Algorithm above. In cases of non-invertibility, our algorithm automatically abandons the current guess and starts a new one so that computation time is not wasted. Our algorithm is written in MATLAB and codes are available at the author’s GitHub page.Footnote 11
An important issue when using guess-verify is how the guessed sequences of regimes, $(\unicode{x1D7D9}_t)_{t=1}^T$, are determined. We first try as an initial guess the case where the constraint is slack in all periods (see Step 1), then guesses which involve a “single spell” at the lower bound (i.e. guesses that differ in the duration of the spell at the bound or its start date), followed by “double spells” at the bound. We provide codes that enumerate the single and double spells for a given $T$ and use these as guesses in a sequential manner; we also provide a code covering cases of triple spells at the bound.Footnote 12 Note that if not all columns of the “guesses matrix” are filled, the remaining columns are filled with random guesses of 0s and 1s. As an example of a model with multiple spells at the bound, we study a multiplier-accelerator model with forward-looking expectations in the Supplementary Appendix (final example); in this model, there are recurrent fluctuations in GDP not driven by structural (news) shocks.Footnote 13
A second important issue is when to terminate the guess-verify procedure. As shown by Holden (2023), if a square matrix $M$ of impulse responses of the bounded variable to the “news shocks” has the property of being a $P$-matrix, there is a unique solution to the linear complementarity problem for all initial conditions. In this case, it makes sense to terminate the guess-verify procedure when a solution is found (there can be only one). We therefore check if $M$ is a $P$-matrix using our algorithm, and if so, we terminate the search procedure immediately after finding a solution. Note that matrix $M$ is a $P$-matrix if and only if all its principal minors are positive; if not, there may be multiple solutions or no solution.
It is generally computationally intensive to check whether a matrix $M \in \mathbb{R}^{T \times T}$ is a $P$-matrix, and computation time increases sharply with the matrix dimension, $T$. In our algorithm, we rely on the recursive test for $P$-matrices in Tsatsomeros and Li (2000), for which the time complexity is exponential in $T$ with base 2; however, to avoid running the test at times when this is not necessary, we have also built some pre-checks into our algorithm.Footnote 14
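For small $T$, the definition itself can be checked directly, as in the following brute-force sketch (our own illustration; the algorithm itself relies on the more efficient recursive test of Tsatsomeros and Li):

```matlab
% Brute-force P-matrix test: M is a P-matrix iff every principal minor
% is positive. The cost grows like 2^T, as noted in the text.
function isP = is_P_matrix(M)
    T = size(M,1);
    isP = true;
    for k = 1:T
        S = nchoosek(1:T, k);            % all index sets of size k
        for j = 1:size(S,1)
            idx = S(j,:);
            if det(M(idx,idx)) <= 0      % non-positive principal minor
                isP = false; return
            end
        end
    end
end
```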
2.4 Fisherian example
Following Holden (2023, Example 2), suppose that for all $t \geq 1$ our model consists of a Taylor-type rule subject to a zero lower bound and the Fisher equation:
$$i_t = \max\{0,\ r + \phi \pi_t - \psi \pi_{t-1}\} \tag{7}$$
$$i_t = r + \pi_{t+1} \tag{8}$$
where $\phi - \psi \gt 1$, $\psi \gt 0$, $\pi_0 \in \mathbb{R}$ and $r \gt 0$ is a fixed real interest rate. To simplify presentation, we set $\phi = 2$; the results are not specific to this case.
There are two solutions to the model (7)–(8). The first solution is away from the bound in all periods and given by $\pi_t = \omega \pi_{t-1}$, $i_t = r + \omega \pi_t$ for all $t \geq 1$, where $\omega = 1 - \sqrt{1-\psi} \in (0,1)$. This solution is stable (inflation converges to 0 and nominal rates to $r$), does not violate the lower bound in period 1 if $r + \phi \pi_1 - \psi \pi_0 \geq 0$, and is away from the bound for all $t \gt 1$ if $r + \phi \pi_t - \psi \pi_{t-1} \gt 0$; hence this solution exists for $\pi_0 \geq -\frac{r}{\omega^2}$.Footnote 15
The second solution has the constraint binding only in period 1, i.e. $i_1 = 0$ and $\pi_t = \omega \pi_{t-1}$, $i_t = r + \omega^2 \pi_{t-1}$ for all $t \gt 1$. Note that $i_1 = 0$ implies that $\pi_2 = -r$ by (8), so $\pi_1 = -r/\omega$ and $\pi_t = \omega \pi_{t-1} = \omega^{t-2}(-r)$ for all $t \gt 1$ (deflation). The constraint binds in period 1 provided $r + \phi \pi_1 - \psi \pi_0 \leq 0$ and is escaped thereafter if $r + \phi \pi_t - \psi \pi_{t-1} \gt 0$ for all $t \gt 1$, so we again require $\pi_0 \geq -\frac{r}{\omega^2}$.Footnote 16
Hence, for $\pi_0 \geq -\frac{r}{\omega^2}$ both solutions exist, and for $\pi_0 \lt -\frac{r}{\omega^2}$ there is no solution.
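The two closed-form solutions are easy to evaluate numerically; a minimal MATLAB sketch follows, using the parameter values that appear later in Example 2 ($r = 0.01$, $\phi = 2$, $\psi = 0.93$, $\pi_0 = 0.02$):

```matlab
% Closed-form solutions of the Fisherian model (7)-(8).
r = 0.01; phi = 2; psi = 0.93; pi0 = 0.02; Tplot = 20;
omega = 1 - sqrt(1 - psi);                % stable root, omega in (0,1)
assert(pi0 >= -r/omega^2)                 % region where both solutions exist

% Solution 1: slack in all periods
pi1 = pi0 * omega.^(1:Tplot);             % pi_t = omega^t * pi_0
i1  = r + omega * pi1;                    % i_t  = r + omega * pi_t

% Solution 2: constraint binds in period 1 only
pi2 = -(r/omega) * omega.^(0:Tplot-1);    % pi_1 = -r/omega, then decays
i2  = [0, r + omega^2 * pi2(1:end-1)];    % i_1 = 0; i_t = r + omega^2*pi_{t-1}

% Shadow rate in period 1 under Solution 2 (negative, so the bind verifies)
istar2_1 = r + phi*pi2(1) - psi*pi0;
```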
Figure 1. The two solutions when $\pi_0 \gt 0$.
Our Algorithm finds both these solutions (see Supplementary Appendix, Section 2). The two solutions are plotted in Figure 1, along with the shadow interest rate $i_t^*$ in both cases. Note that Solution 1 has a positive shadow rate in all periods (which coincides with the actual interest rate); hence this solution is verified. By comparison, Solution 2 hits the bound in period 1 and has a negative shadow rate in this period (so the constraint binds); hence this solution is also verified. If we assign values to all the parameters (as in our codes), then numerical analysis using our algorithm indicates that the matrix $M$ of impulse responses is not a $P$-matrix; this makes sense given the results above: there are multiple solutions (if initial inflation $\pi_0$ is high enough) or no solution (if $\pi_0$ is too low).
3. Simulating multiple equilibria
As argued by Holden (2023), multiple equilibria are a robust feature of otherwise-linear models with occasionally-binding constraints. Therefore, it is important that solution algorithms do not neglect multiplicity. In this section we provide our approach to simulating multiple equilibria; we therefore assume throughout this section that there are multiple solutions.
3.1 Determining an equilibrium path
Suppose $K \geq 2$ perfect foresight solutions are found using our Algorithm. To resolve the indeterminacy problem (incompleteness), we consider a “sunspot” approach rather than trying to “purge” solutions based on a selection criterion. We assume the researcher or policymaker has some “prior probabilities” $p_1, \ldots, p_K \in [0,1]$, $\sum_{k=1}^K p_k = 1$, over the different perfect foresight solutions, where $p_k$ is the probability that agents will coordinate expectations on equilibrium $k$ at date 1 (the first period in which expectations are formed).
The simple first step is to draw a sunspot at date 1 that represents this coordination and will thereby ensure that a particular solution (i.e. perfect foresight path) is realized. We index the $K$ perfect foresight solutions by $(x_t^k)_{t=1}^\infty$ for $k = 1, \ldots, K$, and let $u_1 \sim \mathcal{U}_{(0,1)}$ be a sunspot that is drawn from the uniform distribution on the interval $(0,1)$ at date $t = 1$. Given all this, we have the following rule for determining which equilibrium is realized given the sunspot $u_1$, which is analogous to how the values of categorical random variables, such as realizations of $K$-state Markov processes or outcomes of rolling a die, are determined.
Remark 1. Suppose there are $K \geq 2$ perfect foresight solutions. Given probabilities $p_1, \ldots, p_K$ and a random draw $u_1 \sim \mathcal{U}_{(0,1)}$, the realized perfect foresight path at date 1 is
$$(x_t)_{t=1}^\infty = (x_t^{k^*})_{t=1}^\infty \quad \text{for } u_1 \in \Big(\textstyle\sum_{k=0}^{k^*-1} p_k,\ \sum_{k=0}^{k^*} p_k\Big], \tag{9}$$
where $k^* \in \{1, \ldots, K\}$ and $p_0 := 0$; that is, the unique (realized) perfect foresight solution is $(x_t)_{t=1}^\infty = (x_t^{k^*})_{t=1}^\infty$.
Remark 1 simply gives a way to choose a realized perfect foresight path when there are multiple candidates. It applies to any finite set of perfect foresight solutions and is flexible due to the free specification of probabilities. For instance, if the researcher thinks some solution(s) somewhat “unrealistic”, they may attach low probability to those solution paths. On the other hand, a perfectly agnostic researcher would use “flat priors” $p_k = 1/K$ for all $k$.
As noted, our Algorithm stores all (found) perfect foresight solutions. Given some specified probabilities $p_1, \ldots, p_K$, Remark 1 will then choose one solution as the realized equilibrium, akin to a lottery among perfect foresight paths (we give an example in Section 3.3). With this addition, our algorithm can be used to automate equilibrium determination (to some extent) while taking account of prior beliefs and their economic implications.Footnote 17 Further, we will show that combining Remark 1 with some additional timing assumptions allows the computation of expected outcomes as a probability-weighted average of perfect foresight paths, such that expected welfare or other outcomes may be evaluated under the “veil of ignorance”, i.e. before the sunspot is realized. We do this in the next section.
3.2 Expected outcomes
Suppose now that before expectations are formed in period 1, there is some initial state, period 0, in which the initial conditions $x_0, (e_t)_{t=1}^\infty$ are known, but the sunspot $u_1$ and thus the expectations $x_1, x_2, \ldots$ are not. To motivate this, note that if the sunspot $u_1$ were determined at date 0, multiplicity could not arise since expectations would already have coordinated on a particular equilibrium. Viewed this way, the sunspot $u_1$ coordinates agents’ expectations on a particular equilibrium in period 1 and so “stands in” for some psychological process or “animal spirits” that are extraneous to the economy.Footnote 18 By contrast, the news shocks $(e_t)_{t=1}^\infty$ are structural and we take them as a predetermined aspect of the economic environment.
Thus, we think of date 0 as an initial position that is subject to the “veil of ignorance” in that the expectations of agents—an aspect of their behavior—are unknown and not pinned down. Given this view, an expected path can be defined as follows.
Definition 2. An expected path is a linear combination of perfect foresight solutions in which the weights are the probabilities of each solution, $p_1, \ldots, p_K$.

Based on Definition 2, the expected path of the vector of endogenous variables $x_t$ is:
$$\big(E_0[x_1], E_0[x_2], E_0[x_3], \ldots\big), \tag{10}$$
where the expected values are
$$E_0[x_t] = \sum_{k=1}^K p_k x_t^k, \qquad t \geq 1, \tag{11}$$
i.e. a linear combination of the date-$t$ points of the perfect foresight solutions $1, \ldots, K$.
Similarly, one may compute expected welfare, for example, based on a quadratic loss function or some other approximation to a social welfare function. If $W_k \in \mathbb{R}$ is the welfare associated with solution $k$, then analogous to (11) the expected welfare is
$$E_0[W] = \sum_{k=1}^K p_k W_k. \tag{12}$$
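Computationally, (11) and (12) are simple weighted averages over the stored solutions. A sketch, assuming the $K$ solutions are stored as $n \times T_s$ arrays X{1}, ..., X{K} with welfare numbers W(1), ..., W(K):

```matlab
% Expected path (11) and expected welfare (12).
Ex = zeros(n, Ts);
for k = 1:K
    Ex = Ex + p(k) * X{k};          % E_0[x_t] = sum_k p_k * x_t^k
end
EW = dot(p, W);                     % E_0[W]   = sum_k p_k * W_k
```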
It should be emphasized that expressions such as (11) and (12) also provide a basis for robustness-type analysis in the sense that the impact of different “prior beliefs” can easily be studied and best and worst-case scenarios identified. We now give a simple example.
Example 1. Consider again the Fisherian model with two solutions (Figure 1). A policymaker assigns probabilities $p_1$ to Solution 1 (slack) and $p_2 = 1 - p_1$ to Solution 2 (bind), and has an ad hoc loss function that penalizes squared deviations from a zero inflation target:
$$L = \sum_{t=1}^\infty \beta^{t-1} \pi_t^2, \qquad \beta \in (0,1).$$
By (12), the expected loss is $E_0[L] = p_1 L_1 + p_2 L_2$, where $L_1 = \sum_{t=1}^\infty \beta^{t-1} (\pi_t^1)^2 = \frac{\omega^2}{1-\beta\omega^2}\pi_0^2$, $L_2 = \sum_{t=1}^\infty \beta^{t-1} (\pi_t^2)^2 = \frac{(r/\omega)^2}{1-\beta\omega^2}$, and $\pi_t^1, \pi_t^2$ are the inflation solutions at date $t$. Hence,
$$E_0[L] = \frac{p_1 \omega^2 \pi_0^2 + (1-p_1)(r/\omega)^2}{1-\beta\omega^2},$$
which is linear in $p_1$ and quadratic in the initial inflation $\pi_0$.
In Figure 2 (left panel), we plot the expected loss as the probability of Solution 1, $p_1$, is increased from 0 to 1; we do this for several values of $\pi_0 \gt 0$, including the initial inflation $\pi_0^* = r/\omega^2$ for which the losses are equal, i.e. $L_1 = L_2$.Footnote 19 For inflation rates above $\pi_0^*$, the expected loss increases with $p_1$, since inflation declines geometrically from its initial value $\pi_0$ under the “good” solution (see Figure 1). Going in the other direction, reducing initial inflation below $\pi_0^*$ makes inflation under Solution 1 “closer” to the zero inflation target, so the loss $L_2$ is higher than $L_1$. For Solution 2, where the constraint binds in period 1, the loss $L_2$ is independent of the initial inflation $\pi_0$, as is clear from Figure 1 (and the equation above).
Figure 2. Expected loss $E_0[L]$ as $p_1$ is increased for various $\pi_0$ (left panel) and the expected loss due to the zero lower bound, $E_0[L] - L_1$, as $\pi_0$ is increased (right panel). The initial values in the left panel satisfy $\pi_0^h \gt \pi_0^m \gt \pi_0^* \gt \pi_0^l$, where $\pi_0^*$ is the positive initial inflation that makes the loss under Solution 1, $L_1(\pi_0)$, equal to the loss $L_2$ under Solution 2.
Figure 2 (right panel) plots the expected loss relative to Solution 1, $E_0[L] - L_1$, which can be interpreted as the extra loss attributable to the lower bound friction, which makes Solution 2 (binding constraint) a possible equilibrium path. The expected loss attributed to the lower bound falls as initial inflation is increased, since this raises the Solution 1 loss $L_1$ by more than it raises the expected loss $E_0[L]$. Increasing the probability of Solution 1 “flattens” the relationship between $E_0[L] - L_1$ and $\pi_0$, since the expected loss $E_0[L]$ gets closer to $L_1$ the higher the probability attached to Solution 1 (right panel).
The above exercise can be motivated by thinking of an asset like central bank digital currency that could potentially eliminate the lower bound. From a policy perspective, we would like to know ex ante whether it is worth providing such a currency (at some cost); this will depend partly on the expected benefit of eliminating the lower bound on nominal rates, shown in the right panel. While this is a simple example, the results are suggestive of the policy and robustness-type analysis possible using our expected outcomes approach.
3.3 Stochastic simulations
Lastly, we consider stochastic simulations with switching between multiple equilibria. For such simulations, we assume that the agents think they know all future shocks but are mistaken; as a result, they have imperfect foresight and ignore risk, such that their expectations are not strictly rational, in contrast to the perfect foresight solutions studied thus far.
We use this approach to construct stochastic simulations in which agents form expectations under certainty equivalence. Both Dynare (Adjemian et al. 2011) and OccBin (Guerrieri and Iacoviello, 2015) have built-in options for such simulations using an extended path method, which is widely used, for example, at policy institutions such as central banks.Footnote 20 While the extended path approach is typically applied to occasionally-binding constraint models with a unique solution (or assuming uniqueness), we provide an approach that allows simulations with multiple equilibria, which should speak to a wide audience.
Our approach draws on Remark 1, but it uses that approach to determine an equilibrium in every period $t$ in which the model is simulated; note that this is necessary because, while the entire solution path is known as of date 1 under perfect foresight, this is not true here since the actual (realized) path will generally differ from the one that agents expected. Hence, equilibrium determination (via a sunspot) arises every period in response to the new (unanticipated) initial conditions $x_{t-1}, e_t$ with which agents are confronted.
To make this concrete, suppose that the shock vector $e_t$ is drawn from some distribution. Then given $e_t$, $x_{t-1}$ and agents’ beliefs about future shocks, we can find the solution(s) for these initial conditions using our Algorithm, and one can be selected by a sunspot (akin to Remark 1). The same procedure is then repeated in period $t+1$, given $x_t, e_{t+1}$ and the agents’ expected future shocks. Provided a solution exists for all simulated $t$, we can construct a stochastic simulation path of desired length as follows.
1. Choose some systematic rule for assigning probabilities to different equilibria at each date, e.g. the “flat priors” approach, $p_k = \frac{1}{\text{no. of equilibria}}$ for each solution $k$.Footnote 21

2. Given $x_0, e_1$ and expected shocks $e_2^a, \ldots, e_T^a$, use the Algorithm to find the solution paths $(x_t^k)_{t=1}^{T}$ for $k = 1, \ldots, K_1$, where $K_1$ is the number of solutions at date 1. Using the approach in Remark 1, select one of these solutions, $(x_t^{k_1^*})_{t=1}^{T}$, where $k_1^* \in \{1, \ldots, K_1\}$. Set $x_1 = x_1^{k_1^*}$ (our first simulated point) and move to period 2.

3. Draw vector $e_2$ from some distribution. Given $x_1, e_2$ and expected shocks $e_3^a, \ldots, e_{T+1}^a$, use the Algorithm to find the solution paths $(x_t^k)_{t=2}^{T+1}$ for $k = 1, \ldots, K_2$, where $K_2$ is the number of solutions in period 2. Use the approach in Remark 1 to select one of these solutions, $(x_t^{k_2^*})_{t=2}^{T+1}$, where $k_2^* \in \{1, \ldots, K_2\}$. Set $x_2 = x_2^{k_2^*}$ (second simulated point).

4. Repeat in periods 3, 4, etc. to get a simulation path of the desired length, say $(x_t)_{t=1}^{T^*}$ (a code sketch follows the list).
We now show stochastic simulation in action using our running example (Fisherian model).
Example 2 (Ex. 1 cont’d). We continue the Fisherian example (Section 2.4 and Example 1) but add a shock $e_t \in \mathbb{R}$ to the Taylor-type rule, so $i_t = \max\{0, r + \phi \pi_t - \psi \pi_{t-1} + e_t\}$ as in Holden (2023, Section 2.3). We set parameter values $r = 0.01$, $\phi = 2$, $\psi = 0.93$, $\pi_0 = 0.02$, along with date-1 anticipated shocks $e_1 = e_2^a = -0.001$. The probability of choosing Solution 1 (away from the zero bound) is set at $p_1 = 0.95$, so there is a 5% probability of choosing Solution 2, which hits the bound. At dates $t \gt 1$ we solve for $i_t, \pi_t$ conditional on the inherited state $\pi_{t-1}$ and fresh draws of the monetary policy shocks $e_t, e_{t+1}$, which are drawn from a normal distribution with mean zero and standard deviation $\sigma_e$ (see Figure 3).
Figure 3. Five stochastic simulations: $p_1 = 0.95$ and initial values $\pi_0 = 0.02$, $e_1, e_2 = -0.001$.
We found two solutions in all simulated periods; these are versions of the “high” and the “low” inflation solutions in Figure 1. To resolve the indeterminacy of two solutions, at each date $t$ we drew a sunspot $u_t \sim \mathcal{U}_{(0,1)}$ that selects either Solution 1 (away from bound) or Solution 2 (hits bound) in period $t$. Given our assumption that $p_1 = 0.95$, Solution 1 (Solution 2) is selected at date $t$ if and only if $u_t \in (0, 0.95]$ ($u_t \gt 0.95$); see Remark 1.
In the upper panel of Figure 3, the standard deviation of the policy shock is very small to isolate the impact of the sunspot, i.e. switching between the two equilibria. Of five simulations, three hit the zero lower bound in some period (see dashed lines), giving strong deflation (cf. Figure 1), in contrast to the solution paths that always remain away from the bound.
In the lower panel, the shock variance is large enough to make each individual stochastic simulation path discernible, but the main variations in inflation and interest rates come from switching between equilibria rather than from disturbances to the policy rule. Note that due to switching between equilibria, the average simulated values of inflation and nominal rates, in a long simulation, may differ somewhat from the steady state of the terminal solution.
4. Policy application
We now consider an application to policy rules with multiple equilibria for some parameters. We make use of several concepts discussed so far (sunspots, expected outcomes, the $M$ matrix) and implementation details are discussed in the Supplementary Appendix. Since we restrict attention to perfect foresight solutions, we do not use stochastic simulation (Section 3.3).Footnote 22
4.1 A New Keynesian model
We consider the New Keynesian model studied in Brendon et al. (2013). Besides the zero lower bound, the only other departure from the benchmark model is a policy response to the change in the output gap, similar to the “speed limit” policies considered by Walsh (2003):
$$y_t = E_t[y_{t+1}] - \frac{1}{\sigma}\big(i_t - E_t[\pi_{t+1}]\big) + e_t \tag{13}$$
$$i_t^* = \rho_i i_{t-1}^* + (1-\rho_i)\big(\theta_\pi \pi_t + \theta_{\Delta y}(y_t - y_{t-1})\big) \tag{14}$$
$$\pi_t = \beta E_t[\pi_{t+1}] + \kappa y_t \tag{15}$$
$$i_t = \max\{\underline{i},\ i_t^*\} \tag{16}$$
where $\theta_\pi \gt 1$, $\beta \in (0,1)$, $\theta_{\Delta y}, \kappa, \sigma \gt 0$, $\rho_i \in [0,1)$, $\underline{i} = \beta - 1$ and all values of $e_t$ are known.
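For concreteness, the model can be mapped into the canonical form (1)–(3) of Section 2. The sketch below does this under our own assumptions that $\rho_i = 0$ and that the variables are ordered $x_t = (i_t, \pi_t, y_t)^{\prime}$; the row arrangement is our choice, and the calibration matches the one described next.

```matlab
% Model (13)-(16) in canonical form, assuming rho_i = 0 and ordering
% x_t = (i_t, pi_t, y_t)'. Row 1: policy rule (slack regime) or i_t = ibar
% (bind regime); rows 2-3: NKPC and IS curve, identical across regimes.
beta = 0.99; sigma = 1; thpi = 1.5; thdy = 1.6; ibar = beta - 1;
kappa = (1 - 0.85)*(1 - 0.85*beta)/0.85 * (2 + sigma);
n = 3;

B1bar = [1, -thpi, -thdy;  0, 1, -kappa;  1/sigma, 0, 1];
B2bar = [0, 0, 0;          0, beta, 0;    0, 1/sigma, 1];
B3bar = [0, 0, -thdy;      0, 0, 0;       0, 0, 0];
B4bar = [0; 0; 1];                       % demand shock enters the IS curve
B5bar = zeros(n,1);

B1til = B1bar; B1til(1,:) = [1, 0, 0];   % bind regime: i_t = ibar
B3til = B3bar; B3til(1,:) = 0;
B2til = B2bar; B4til = B4bar; B5til = [ibar; 0; 0];

% Shadow rate (3): istar_t = F*[x_t; x_{t+1}; x_{t-1}] + G*e_t + H
F = [0, thpi, thdy,  0, 0, 0,  0, 0, -thdy];  % note f11 = 0, as required
G = 0; H = 0;
```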
We start by setting parameters at $\beta = 0.99$, $\sigma = 1$, $\rho_i = 0$ (no interest rate smoothing) and $\kappa = \frac{(1-0.85)(1-0.85\beta)}{0.85}(2+\sigma)$ as in Brendon et al. (2013); additionally, we set $\theta_\pi = 1.5$ and $\theta_{\Delta y} = 1.6$ to initially replicate the exercise in Holden (2023, Appendix E). Starting at steady state, we consider a 1% demand shock at date 1 (i.e. $e_1 = 0.01$, $e_t = 0$ for $t \gt 1$) and search for solutions to model (13)–(16) using our algorithm. We found two perfect foresight solutions, as expected, and the solutions match the ones reported by Holden (see Figure 4).
Figure 4. Multiple equilibria in the Brendon et al. model: $e_1 = 0.01$ and $i^*_0 = y_0 = \rho_i = 0$.
There are two perfect foresight solutions in Figure 4: one where the lower bound is never hit and inflation and the output gap rise only marginally above their steady-state values; and a second solution where interest rates are at the lower bound in the first two periods and there is strong and persistent deflation and large negative output gaps (Figure 4, all panels). This “bad” solution is clearly inferior in terms of stabilization of inflation and output gaps and arises due to pessimistic self-fulfilling expectations: if agents expect low inflation, then the rise in the real rate lowers the output gap and inflation, validating the expectations.
Having a growth-based interest rate rule, for which the shadow rate responds to inflation and the change in the output gap, is important for the multiplicity result, as there is a unique perfect foresight solution if the shadow rate follows a Taylor rule with a response to the output gap in levels (Holden, 2023, Section 4.3). In addition, multiplicity occurs only when the response to the change in the output gap (the “speed limit”) is strong enough, as is clear from Figure 5 below. Such speed limit policies have been studied in the literature, with encouraging results from both theoretical and practical perspectives (Walsh, 2003; Orphanides, 2003; Yetman, 2006), and some empirical work suggests that central bank behavior is consistent with a speed limit rule (e.g. Mehra, 2002).Footnote 23
Given the potential negative consequences of the interest rate rule (14) in this model, we consider some alternative policy rules, to see if they can restore uniqueness by eliminating the “bad” solution. Before doing so, we first confirm that multiplicity is a robust feature of this model by studying the properties of the $M$-matrix (discussed in Section 2.3 above).
Figure 5. Regions in which $M$ is not a $P$-matrix (black) when $T = 16$ (case: $\rho_i = 0$).
In Figure 5, we show parameter regions for which the $M$ matrix of impulse responses is a $P$-matrix (determinacy region, white) and regions for which it is not (black). Recall that if $M$ is a $P$-matrix, then there is a unique solution for all initial conditions. We set $T = 16$ and consider different combinations of the response coefficients $\theta_\pi$, $\theta_{\Delta y}$ in the interest rate rule; in each panel we vary the inverse elasticity of intertemporal substitution, $\sigma$.
Figure 5 shows there is a unique solution if the response to the change in the output gap, $\theta_{\Delta y}$, is not too strong relative to the inflation response $\theta_\pi$ (see the white regions). In the first panel, which uses the baseline value of $\sigma = 1$, we see that $M$ is a $P$-matrix only if the response to the change in the output gap is smaller than the response coefficient on inflation. Note that the parameter combination used in Figure 4 ($\theta_\pi = 1.5$, $\theta_{\Delta y} = 1.6$), where we see two solutions, lies in the black region as expected. In fact, Brendon et al. (2013, Proposition 1) show that the model (13)–(16) has multiple perfect foresight equilibria if and only if $\theta_{\Delta y} \gt \sigma \theta_\pi$, and Figure 5 is consistent with this conclusion.
In summary, multiplicity is a robust outcome and this raises the question of whether alternative monetary policies could restore uniqueness by eliminating the bad solution. We investigate this below while retaining the “speed limit” aspect of the policy rule, which may have theoretical and practical advantages as argued by Walsh (Reference Walsh2003) and others. We start with interest rate smoothing before turning to unconventional monetary policy rules.
4.2 Interest rate smoothing
We first ask whether policymakers could achieve a better outcome by smoothing the shadow interest rate in (14), setting $\rho_i \in (0,1)$. We started out by checking the regions where the $M$ matrix is a $P$-matrix for $T = 16$, similar to Figure 5 but with three $\rho_i$ values and $\sigma = 1$.
Figure 6. Regions in which $M$ is not a $P$-matrix (black) for $T = 16$ and various $\rho_i$.
The $P$-matrix regions in Figure 6 indicate that the determinacy region expands once the interest rate smoothing parameter $\rho_i$ is large enough. For $\rho_i = 0.8$, the parameter combination $\theta_\pi = 1.5$, $\theta_{\Delta y} = 1.6$ is now in the determinacy region (white), so there is a unique solution at these parameter values (plotted in Figure 7 below). However, indeterminacy remains if the response to the speed limit is strong enough (Figure 6, right) and, as noted by Holden (2023, Appendix E), the $P$-matrix regions under interest rate smoothing tend to those in the model without any smoothing as $T$ is increased, such that multiplicity remains a widespread problem and there are both “good” and “bad” equilibria.Footnote 24
In Figure 7, we present a numerical simulation. We set $\theta_\pi = 1.5$, $\theta_{\Delta y} = 1.6$ and $e_1 = 0.01$ as in the baseline simulation in Figure 4; the only difference is that the interest rate smoothing parameter is set at either $\rho_i = 0.40$ (weak smoothing) or a high value $\rho_i = 0.80$ (strong smoothing). With moderate interest rate smoothing ($\rho_i = 0.40$) there are two solutions and the “bad” solution is exacerbated relative to Figure 4: inflation and the output gap fall by around 5 times as much initially, and interest rates spend 7 periods at the lower bound, rather than 2. Intuitively, it is easier to induce a lengthy spell at the lower bound when the shadow rate is persistent, provided $\rho_i$ is not large enough to eliminate the bad solution. In the case of $\rho_i = 0.8$, there is a unique solution that is away from the bound in all periods (dashed gray line), though as Figure 6 shows, uniqueness is not a general result.
Figure 7. Perfect foresight solutions with interest rate smoothing when $e_1 = 0.01$, $i^*_0 = y_0 = 0$, $\sigma = 1$ and $\theta_\pi = 1.5$, $\theta_{\Delta y} = 1.6$: two different values of $\rho_i$ ($\rho_i = 0.4$, solid; $\rho_i = 0.8$, dashed).
For the parameters $\theta_\pi = 1.5$, $\theta_{\Delta y} = 1.6$, the determinacy result seems to hold for smoothing of around $\rho_i = 0.8$ or higher.Footnote 25 Some intuition can be gained by scaling $\theta_\pi, \theta_{\Delta y}$ in Eq. (14) by $\frac{1}{1-\rho_i}$ and letting $\rho_i \rightarrow 1$. In this case, the shadow interest rate tends to $\Delta i_t^* = \theta_\pi \pi_t + \theta_{\Delta y} \Delta y_t$, which is consistent with any rule of the form $i_t^* = \text{constant} + \theta_\pi p_t + \theta_{\Delta y} y_t$, where $p_t = \pi_t + p_{t-1}$ is the log price level. The latter is a price-level targeting rule without a speed limit term. Given that Holden (2023, Appendix E) finds that a levels rule restores determinacy, it is intuitive that sufficiently high values of $\rho_i$ lead to the same conclusion.
In short, interest rate smoothing does not, in general, prevent the occurrence of multiple equilibria, and when the “bad” solution is present we see that a smoothing rule somewhat worsens the destabilization of inflation and the output gap. At the same time, however, we saw that highly inertial interest rate rules may eliminate the bad solution.
4.3 Forward guidance
Since interest rate smoothing is not a robust solution to indeterminacy and destabilization, we now consider forward guidance. We are motivated here by the observation that forward guidance promises an extended period of expansionary monetary policy, such that self-fulfilling pessimistic expectations of inflation and the output gap, as under the “bad” solution, might not be rational. This unconventional policy is not studied by Holden (2023).
To model forward guidance, we consider expansionary “news shocks” to the shadow interest rate, such that (14) becomes
$$i_t^* = \rho_i i_{t-1}^* + (1-\rho_i)\big(\theta_\pi \pi_t + \theta_{\Delta y}(y_t - y_{t-1})\big) + e_t^{FG}, \tag{17}$$
where $e_t^{FG} \leq 0$ is a forward guidance “news shock” and $e_1^{FG} = 0$.
We assume that $e_t^{FG} \lt 0$ only in periods when forward guidance is in place. Our question is whether such a policy eliminates the “bad” solution or better stabilizes inflation and the output gap. Table 1 records the percentage of initial conditions (out of 800) for which both a “good” and a “bad” solution exist at different forward guidance horizons; here, “different initial conditions” refers to varying the size of the news shocks $e_t^{FG}$ within a range, and we consider 800 such cases for each forward guidance horizon (see the notes to Table 1). By forward guidance horizon we mean the number of periods in which there are “expansionary” news shocks $e_t^{FG} \lt 0$, and we consider only consecutive periods of such shocks. All other parameters and initial conditions are kept fixed at the baseline values used in Figure 4.
Table 1. Determinacy at various forward guidance horizons (800 cases, $\rho _i=0$)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_tab1.png?pub-status=live)
Note: Forward guidance is modeled via news shocks $e_t^{FG} = -0.01 - \mathrm{Uniform}(0,0.01)$ which start in period 2 and last until the stated date. Each row reports 800 cases of different news shocks.
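As a minimal sketch of how the 800 cases per horizon could be generated, the snippet below draws news-shock paths with the distribution stated in the table note; `count_perfect_foresight_solutions` is a hypothetical stand-in for the paper's guess-verify solver, not part of any published code.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_fg_shocks(horizon, T=16):
    """One case of forward guidance news shocks (Table 1 note):
    e_t^FG = -0.01 - Uniform(0, 0.01) in periods 2, ..., 1 + horizon,
    and zero elsewhere (periods indexed 1, ..., T)."""
    e_fg = np.zeros(T)  # e_fg[t-1] holds e_t^FG
    e_fg[1:1 + horizon] = -0.01 - rng.uniform(0.0, 0.01, size=horizon)
    return e_fg

def count_perfect_foresight_solutions(e_fg):
    """Hypothetical stand-in for the guess-verify solver: it should return
    the number of perfect foresight solutions for a given shock path."""
    raise NotImplementedError

def share_multiple(horizon, n_cases=800):
    """Percentage of cases with both a 'good' and a 'bad' solution."""
    hits = sum(count_perfect_foresight_solutions(draw_fg_shocks(horizon)) > 1
               for _ in range(n_cases))
    return 100.0 * hits / n_cases
```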
The results in Table 1 show that forward guidance is not effective in eliminating indeterminacy. Up to a 5-period forward guidance horizon (rows 1–5), all 800 sets of news shocks led to multiple solutions (i.e. 100% of cases).Footnote 26 If the forward guidance horizon is 5 periods, the length of spells at the zero lower bound is no longer the same across all 800 cases of news shocks (see Table 1, final column) and there can be multiple spells at the lower bound (see Figure 8, bottom right, for an example); however, neither a longer horizon nor interest rate smoothing seems to alter the conclusion of widespread multiplicity.
Figure 8 shows that the “good” solutions under forward guidance have inflation and output “overstimulated” (upper panel), while the “bad” solutions have very poor stabilization relative to the original policy given by the solid black line (“Baseline,” lower panel). Importantly, stabilization under the “bad” solutions is worse when forward guidance is present, and it deteriorates further as forward guidance becomes more aggressive (i.e. when it delivers a similar “dose” for a larger number of periods). Hence, while some previous work suggests that forward guidance could be stabilizing at the zero lower bound (e.g. Eggertsson et al. Reference Eggertsson, Egiev, Lin, Platzer and Riva2021), our results point in the opposite direction.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_fig8.png?pub-status=live)
Figure 8. “Good” and “bad” solutions with forward guidance: $e_1=0.01$, $i^*_0=y_0=0$, $\theta _\pi = 1.5$, $\theta _{\Delta y}=1.6$, $\rho _i=0$, $\sigma = 1$. Forward guidance news shocks $e_t^{FG} = -0.015$ for 2 periods (FG1), 4 periods (FG2), and 5 periods (FG3), starting at $t=2$, with $e_1^{FG} =0$ in all cases. Baseline: no forward guidance.
4.4 Price-level targeting
Lastly, we consider price-level targeting. A key motivation is work showing that price-level targeting interest rate rules can mitigate or resolve indeterminacy in New Keynesian models (Giannoni, Reference Giannoni2014; Holden, Reference Holden2023). We ask whether a response to the price level is sufficient to restore determinacy when the “speed limit” term is retained in the interest rate rule. We retain the speed limit because policymakers may find such policies attractive (Walsh, Reference Walsh2003) even though they may inadvertently bring about multiplicity (Figures 4–8); this is the problem we want to solve.
Accordingly, we assume the shadow interest rate under price-level targeting is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_eqn18.png?pub-status=live)
where $\theta _p\gt 0$ and $p_t := \pi _t + p_{t-1}$ is the log price level.
Unlike the rule in Holden (Reference Holden2023, Appendix E), the shadow interest rate still responds to the change in the output gap. We now consider the implications for determinacy.
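Equation (18) likewise appears only as an image; a plausible reconstruction from the surrounding definitions is given below, where including the smoothing term (with $\rho _i=0$ in Figure 9) is our assumption.

```latex
% Assumed form of (18): price-level targeting with a speed limit term.
\[
i_t^* = \rho_i\, i_{t-1}^* + (1-\rho_i)\left(\theta_p\, p_t
      + \theta_{\Delta y}\, \Delta y_t\right),
\qquad \theta_p > 0, \quad p_t := \pi_t + p_{t-1}.
\]
```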
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_fig9.png?pub-status=live)
Figure 9. Regions in which $M$ is not a $P$-matrix (black): price-level targeting when $T=16$. Note that $\theta _p$ is the response coefficient on the log price level and $\rho _i=0$.
Figure 9 plots the regions in which $M$ is not a $P$-matrix (black), again for $T=16$ as in Figure 5. The indeterminacy region shrinks substantially, but indeterminacy is not ruled out when a strong response to the “speed limit” is combined with a relatively weak response to the price level (black regions). Thus, only a sufficiently strong interest rate response to the price level—a large enough $\theta _p$ in (18)—ensures determinacy. Intuitively, this says that a long-lasting, aggressive expansionary policy to restore the price level to target is sufficient to rule out the “bad” equilibrium based on pessimistic expectations. Perfect foresight simulations suggest that the “good” solution under price-level targeting (with the nominal rate away from the bound) performs well in terms of stabilization, and allowing interest rate smoothing in the rule reinforces this conclusion (see Supplementary Appendix, Section 3.4).
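The computation behind Figure 9 requires testing whether $M$ is a $P$-matrix, i.e. whether every principal minor of $M$ is strictly positive. A minimal brute-force test, written here under the assumption that $M$ is supplied as a dense array, is sketched below; at $T=16$ this checks $2^{16}-1$ determinants, which is slow but feasible.

```python
import itertools
import numpy as np

def is_p_matrix(M, tol=1e-12):
    """Return True if every principal minor of M is strictly positive.

    Brute force over all non-empty index subsets: 2^n - 1 determinants,
    feasible for the T = 16 horizon used in Figure 9.
    """
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False  # a non-positive principal minor: not a P-matrix
    return True
```

In practice the early exit above does most of the work; scanning a grid of $(\theta _p, \theta _{\Delta y})$ values with such a test would trace out the black regions in Figure 9.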
Interestingly, if the reaction to the price level $\theta _p$ is small enough that both solutions exist, then even a very weak response to the price level works well from a stabilization perspective. In this case, the “bad” solution under price-level targeting looks somewhat “better” than under the inflation targeting, interest rate smoothing, or forward guidance rules (see Figure 10). Inflation and the output gap drop far less initially (i.e. in period 1), and these variables and interest rates then oscillate around the “good” solution, within quite a narrow range, with deviations that “die out” quite quickly. It should be noted, however, that not all “bad” solutions under price-level targeting are “tamer” than under the original rule.Footnote 27
Notably, our conclusions on price-level targeting are less positive than in Holden (Reference Holden2023), where analytical and numerical results are used to show that a price-level targeting rule ensures determinacy in a range of New Keynesian models. What our results highlight is that the assumption of a response to the level of the output gap, rather than its first-difference, is crucial. In the present model, where the “speed limit” $y_t-y_{t-1}$ enters the interest rate rule, determinacy is not guaranteed but instead requires a sufficiently strong response to the price level relative to the response to the speed limit, as shown in Figure 9. A particular example of multiplicity when the response to the price level is “too weak” can be seen in Figure 10.
4.5 Welfare analysis
Finally, we consider some welfare implications of multiplicity, an issue not considered in Holden (Reference Holden2023). We make use here of the “expected outcomes” approach in Section 3.2. In the above model, the microfounded social loss function has the form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_eqn19.png?pub-status=live)
where $\lambda \gt 0$ is the relative weight on output gap variations.Footnote 28
As we have seen, there are two perfect foresight solutions for some parameter values. In Table 2, we report the losses under each policy rule based on (19), for both the “good” solution (Solution 1) and the “bad” solution (Solution 2, when it exists), i.e.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_eqn20.png?pub-status=live)
where $k=1$ denotes the good solution and $k=2$ denotes the bad solution.
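Equations (19) and (20) are also rendered as images; a standard discounted quadratic loss consistent with the descriptions in the text would take the following form, where the discounting, horizon, and timing are our assumptions.

```latex
% Assumed form of (19)-(20): discounted quadratic loss, evaluated on
% perfect foresight path k (k = 1 good, k = 2 bad).
\[
L = \sum_{t=1}^{\infty} \beta^{t} \left( \pi_t^2 + \lambda y_t^2 \right),
\qquad
L_k = \sum_{t=1}^{\infty} \beta^{t} \left( \pi_{k,t}^2 + \lambda y_{k,t}^2 \right).
\]
```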
Table 2. Welfare losses and policy rules: good and bad solutions ($\lambda =0.1$)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_tab2.png?pub-status=live)
Note: IT1 is the baseline case in Figure 4. IT2 adds interest rate smoothing with $\rho _i = 0.4$ (Figure 7). FG1 and FG2 are shown and described in Figure 8. PLT1 (PLT2) sets $\theta _p=1.5$ ($\theta _p=0.015$; see Figure 10). Reported losses are expressed as ratios relative to the “good” loss $L_1$ for rule IT1.
For each type of policy rule we consider two parameterizations; other parameters and initial conditions are set at their baseline values as in the exercises in Figures 4, 7, 8, and 10. We see that the price-level targeting rule PLT1 ($\theta _p=1.5$) gives the lowest loss among the rules shown and restores determinacy (penultimate row); it also outperforms an interest rate smoothing rule with $\rho _i =0.8$ (Figure 7) or higher.Footnote 29
Even when the price level response is very small ($\theta _p = 0.015$, Figure 10)—such that multiple solutions arise—the loss under the good solution is much smaller than under the forward guidance rules (at 3.5 times the loss under the baseline “good solution”), while the bad solution imposes a relatively small welfare loss compared to the alternatives. These main conclusions are not sensitive to the relative weight $\lambda$ on output gap deviations in the loss function (which is set at 0.1 in Table 2).Footnote 30
We now consider expected social welfare losses in the above model. To do so, we follow the “expected outcomes” approach of Section 3.2, which computes a weighted average in which the weights are the probabilities that expectations will coordinate on each solution. This can be viewed as an expected outcome from some initial position or state (period 0) in which it is not yet known which perfect foresight solution agents’ expectations will coordinate on. We can therefore think of this welfare exercise as an assessment made by policymakers (or researchers) who view the economy from behind the “veil of ignorance.”
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_fig10.png?pub-status=live)
Figure 10. “Good” and “bad” solutions under price-level targeting for $e_1=0.01$, $i^*_0=y_0=0$ and a “weak” response to the price level $\theta _p = 0.015$, when $\sigma = 1$, $\theta _{\Delta y}=1.6$ and $\rho _i=0$.
We stick with the same policy rules (and parameter values) as in Table 2. Given (20) and a probability $p_1$ of Solution 1, the expected loss under a given rule is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_eqn21.png?pub-status=live)
where $L_1=L_2$ in the case of rules for which there is a unique solution.
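Given (21), reproducing the ranking exercise in Figure 11 amounts to evaluating a two-point mixture over a grid of probabilities. A minimal sketch follows; the loss pairs are illustrative placeholders, not the paper's numbers.

```python
import numpy as np

def expected_loss(L1, L2, p1):
    """Expected loss under (21): E[L] = p1 * L1 + (1 - p1) * L2.

    For rules with a unique solution, pass L2 = L1 so that E[L] = L1.
    """
    return p1 * L1 + (1.0 - p1) * L2

# Scan the probability of the 'good' solution, as in Figure 11.
# The (L1, L2) pairs below are placeholders for the Table 2 losses.
p_grid = np.linspace(0.0, 1.0, 1001)
rules = {"IT1": (1.0, 50.0), "IT2": (0.9, 250.0), "PLT2": (3.5, 6.0)}
curves = {name: expected_loss(L1, L2, p_grid)
          for name, (L1, L2) in rules.items()}
```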
In Figure 11, we plot the expected welfare losses under each rule. While the price-level targeting rule PLT1 ($\theta _p=1.5$) performs best, as expected, there is no clear ranking among the other rules because the expected losses are sensitive to the probability $p_1$ of Solution 1 (both panels). The “weak response” price-level targeting rule PLT2 ($\theta _p=0.015$) is ranked second for all values of $p_1$ up to approximately 0.9998 (right panel), but is then outperformed by the inflation targeting rule IT1 (no smoothing); as the probability $p_1$ approaches 1, the inflation targeting rule IT2 (smoothing, $\rho _i=0.4$) outperforms both the PLT2 rule and IT1, though this requires that the bad solution is a near-zero probability event.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250210163915811-0236:S1365100525000021:S1365100525000021_fig11.png?pub-status=live)
Figure 11. Expected welfare losses as the probability of Solution 1, $p_1$, is varied. The interest rate rules are the same as in Table 2. The left panel uses a log scale.
Overall, the performance of price-level targeting is quite robust compared to the other policies, whose losses are very sensitive to the probability that agents coordinate expectations on the “good” solution (Solution 1). If the “bad” equilibria have non-trivial probability, price-level targeting looks highly attractive; however, if such equilibria are considered extremely unlikely, then the potential benefits of price-level targeting look smaller and might not be convincing to policymakers. Thus, we see the potential value of assessing the robustness of expected welfare to the probability of the good solution, as in Figure 11.
Finally, a word of caution is in order. Since the “good” and “bad” solutions differ under each policy rule, the probabilities of each solution could also differ across rules; for example, the “tame” bad solution under price-level targeting with $\theta _p=0.015$ (Figure 10) might be considered more plausible than the highly destabilizing bad solution under inflation targeting with moderate interest rate smoothing (Figure 7, $\rho _i = 0.4$). If so, an analysis like that in Figure 11 is still useful, but with the caveat that the expected welfare losses under each rule should be computed conditional on the different probabilities of each solution that the researcher or policymaker has in mind.Footnote 31
5. Conclusion
In this paper we have extended the guess-verify algorithm in Guerrieri and Iacoviello (Reference Guerrieri and Iacoviello2015) to detect and simulate multiple perfect foresight equilibria of otherwise-linear dynamic models with news shocks. We showed how to compute expected paths in models with multiple equilibria using a “prior probabilities” approach, and also how to run stochastic simulations in which there is switching between equilibria on the simulated path.
We illustrated our algorithm—and the above extensions—using a simple Fisherian model with “high” and “low” inflation solutions for a range of initial values. We also presented a policy application based on a New Keynesian model with a zero lower bound, a policy response to the change in the output gap (the “speed limit”) rather than its level, and multiple equilibria for some parameter values. One of these equilibria is a “bad” solution in which self-fulfilling pessimistic expectations drive down inflation and the output gap, and interest rates spend some time at the lower bound. Multiplicity arises for a wide range of parameter values under an inflation targeting rule, and rules with interest rate smoothing or forward guidance do not provide robust remedies for multiplicity and the existence of a bad solution.
However, replacing the inflation target in the interest rate rule with a price-level target shrinks the indeterminacy region substantially. Our results suggest a simple rule of thumb: a strong enough response of interest rates to the price level avoids indeterminacy by eliminating the bad solution. Further, price-level targeting is quite robust: when the price level response is small enough for a “bad solution” to exist, that solution is not highly deflationary, in contrast to the other policies. Price-level targeting also performs well against the alternatives in terms of welfare, including a measure of expected welfare under the veil of ignorance.
The above results highlight some uses of our algorithm, such as policy analysis and simulation, and suggest some interesting avenues for future research.
Acknowledgements
For useful comments I thank Alessandro Mennuni, Serhiy Stepanchuk, Tom Holden, Michael Reiter, Andrew Gimber, Federico Di Pace, conference participants at the CEF 2023 (Nice) and MMF 2023 (Portsmouth) and seminar participants at the Bank of England and the Central Bank of Ireland.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S1365100525000021