The Taylor rule as a benchmark for the interest-rate setting
Forward-looking variables in macroeconomic models
Estimation of monetary policy reaction function
Extensions and evidence
Zero-lower bound and monetary policy rules
Monetary policy and monetary policy rules
The Great Inflation of the 1970’s: Inflation perceived as the most important issue of macroeconomic policy.
Policy makers and researchers attempted to develop appropriate policies that would decrease inflation and prevent the emergence of inflationary pressures in the future.
Key lessons from the new models: importance of anchored inflation expectations and credible monetary policy.
Proposed solution: Policy makers should follow rules providing guidance for policy making and should avoid discretion.
Two types of rules: one focused on the money supply, the other on the interest rate.
Monetary policy and monetary policy rules: Three generations
Money supply rules: Based on the belief that stable growth of money supply would lead to stable inflation (monetarism).
The growth rate of the money supply should be the sum of the target inflation and the desired growth rate of output, or it should simply aim to keep the money stock growing steadily at a 2 or 3% annual rate. Abandoned quickly in the 1980’s, due to the inability to control the broad money stock (M2) and the instability of the money-inflation-activity nexus.
Interest rate rules: The short-term interest rate expressed as a function of the output gap and the deviation of the inflation rate from its target.
Advantage: interest rates can be easily adjusted by the central bank via open market operations, so they are under its direct control.
Inflation targeting: Focused mostly on explicit inflation targeting, based on a public announcement of the targeted inflation rate and monetary policy actions aimed at fulfilling that target. Despite criticism, inflation targeting remains the leading approach to monetary policy (the ECB and the Fed confirmed it in their revised frameworks in 2020/2021).
Monetary policy and monetary policy rules
Although no rule can fully capture what a central bank does, interest rate rules provide a reasonable benchmark (i.e. which level of the interest rate would correspond to prevailing economic conditions) and a framework for inferring actual central bank behavior empirically (the monetary policy reaction function).
Interest rate rules remain relevant for the period of inflation targeting. The short-term interest rate remained the main instrument of monetary policy until the Great Recession => it still makes sense to estimate monetary policy reaction functions.
Unconventional policies are usually adopted when interest rates hit the zero lower bound. Both large-scale asset purchases and forward guidance affect interest rates at longer horizons => this allows us to calculate the so-called shadow rates.
Interest rate rules can thus be used to describe central banks’ actions empirically.
Taylor rule
Interest rate rule proposed by John Taylor in 1993. Two elements reflect the dual mandate of the Fed:
First, the nominal interest rate should rise more than one-for-one with inflation, so that the real interest rate increases when inflation rises.
Second, the desired interest rate \(i^*\) should rise when output is above normal and fall when output is below normal.
\[
i_t^{*} = r^{n} + \pi_t + \beta\,(\pi_t - \pi^{*}) + \gamma\,(\log Y_t - \log Y_t^{n}),
\]
where \(β = 0.5\), \(γ = 0.5\), and both the equilibrium real interest rate and the inflation target equal 2%: \(r^n = π^* = 2\%\).
This rule provides a good description of U.S. monetary policy from the 1980’s to the 2000s, when the Fed used interest rates to keep inflation and output stable. Values predicted by the rule serve as a benchmark.
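For illustration (hypothetical numbers, not from the slides): with inflation at 4% and a closed output gap, the benchmark parameterization prescribes

\[
i^{*} = 2 + 4 + 0.5\,(4 - 2) + 0.5 \cdot 0 = 7\%,
\]

so a rise of inflation from 2% to 4% raises the prescribed nominal rate from 4% to 7%, i.e. the implied real rate rises as well, consistent with the first element above.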
Data for Taylor rule
```r
library(xts)
library(pdfetch) #Library for loading FRED data
library(ggplot2) #Library for plotting
library(mFilter) #Library for HP filter
library(tsbox)   #Library for time series conversion

data_tr <- pdfetch_FRED(c("GDP", "GDPC1", "GDPPOT", "FEDFUNDS", "CPIAUCSL", "PPIACO", "CPILFESL", "M2SL"))
data_tr <- to.period(data_tr, period = "quarter", OHLC = FALSE)
data_tr <- ts_first_of_period(data_tr) # to assure consistency in dates across time series objects

#Transformations
data_tr$lgdp <- log(data_tr$GDPC1)       # Log of real GDP
data_tr$lgdp_pot <- log(data_tr$GDPPOT)  # Log of potential GDP
data_tr$gdpgap <- 100*(data_tr$lgdp - data_tr$lgdp_pot) # Gap as a difference between actual and potential output
hp_gdp <- hpfilter(data_tr$lgdp, freq = 1600, type = "lambda")
data_tr$gdpgap_hp <- 100*hp_gdp$cycle
data_tr$l_cpi <- log(data_tr$CPIAUCSL)
data_tr$l_cpi_core <- log(data_tr$CPILFESL)
data_tr$l_ppiaco <- log(data_tr$PPIACO)
data_tr$fedfunds <- data_tr$FEDFUNDS
data_tr$m2 <- data_tr$M2SL
data_tr$m2_l <- log(data_tr$m2)
data_tr$m2_ldiff <- 100*diff(data_tr$m2_l, 4)
data_tr$ppiaco_ld <- 100*diff(data_tr$l_ppiaco, 4)

#Because GDPPOT has not been updated to 2017 dollars at the BEA yet, the gap from the HP filter is used
data_tr$gdpgap <- data_tr$gdpgap_hp

#Inflation rates and inflation expectations
#Based on GDP deflator
data_tr$deflator_l <- log(100*(data_tr$GDP/data_tr$GDPC1))
data_tr$inf_def <- 100*diff(data_tr$deflator_l, 4)
data_tr$infexp_def <- lag(data_tr$inf_def, k = -2) # negative lag of an xts object creates lead, that is inflation(t+2)

#Based on CPI inflation
data_tr$inflation <- 100*diff(data_tr$l_cpi, 4)
data_tr$infrate <- 1/4*(data_tr$inflation + lag(data_tr$inflation, k = 1) +
                        lag(data_tr$inflation, k = 2) + lag(data_tr$inflation, k = 3))
data_tr$infexp <- lag(data_tr$inflation, k = -2)   # negative lag of an xts object creates lead

#Federal funds rate consistent with the Taylor rule (GDP deflator)
data_tr$TaylorRule_def <- 2 + data_tr$inf_def + 0.5*data_tr$gdpgap + 0.5*(data_tr$inf_def - 2)
#Federal funds rate consistent with the Taylor rule (CPI)
data_tr$TaylorRule_cpi <- 2 + data_tr$infrate + 0.5*data_tr$gdpgap + 0.5*(data_tr$infrate - 2)
```
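A minimal plotting sketch (assuming the objects created above; ts_ggplot() from the already loaded tsbox package returns a ggplot object):

```r
# Compare the actual federal funds rate with the two Taylor-rule benchmarks
ts_ggplot(data_tr[, c("fedfunds", "TaylorRule_def", "TaylorRule_cpi")]) +
  labs(title = "Federal funds rate vs. Taylor-rule benchmarks", y = "percent")
```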
Unconventional monetary policy and the shadow rate
At the onset of the Great Recession, many central banks quickly decreased interest rates to support economic activity. However, the interest rate response was limited by the zero lower bound.
But the Taylor rule implied negative nominal interest rates during the Great Recession and during the COVID pandemic.
Therefore, many central banks launched unconventional monetary policies aimed at decreasing interest rate spreads and long-term interest rates. Central banks also started to buy assets other than short-term government bonds => quantitative easing.
Additionally, most central banks committed themselves to keeping interest rates low in the future (so-called forward guidance, likely to affect expectations and long-term interest rates).
Central banks in Switzerland, Israel and also the Czech Republic opted for exchange rate interventions aimed at depreciating the domestic currency; other central banks (Switzerland, Sweden) experimented with moderately negative interest rates.
Unconventional monetary policy and the shadow rate
The use of unconventional monetary policies implies that the short-term interest rate no longer represents the monetary policy stance at the ZLB.
To permit a comparison of unconventional monetary policies with conventional interest rate policy, shadow rate models were introduced (Bullard, 2012; Krippner, 2013; Wu and Xia, 2016).
Intuition of the Wu-Xia model:
Use forward rates corresponding to 1/4, 1/2, 1, 2, 5, 7, and 10 years calculated from the yield curve.
Use the Nelson-Siegel model to derive unobserved common factors driving the behavior of the yield curve.
The shadow rate is assumed to be a linear function of these factors.
In short, it is the behavior of the short-term interest rate implied by long-term interest rates.
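A sketch of the resulting mapping (notation mine, following the shadow-rate literature): with latent yield-curve factors \(X_t\),

\[
s_t = \delta_0 + \delta_1' X_t, \qquad r_t = \max(\underline{r}, s_t),
\]

so the observed short rate \(r_t\) coincides with the shadow rate \(s_t\) whenever the latter is above the lower bound \(\underline{r}\), while at the ZLB the shadow rate can turn negative.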
```r
#Shadow rate - construction from the federal funds rate and the shadow rate from Cynthia Wu's website
data_shadow6008 <- data_tr$FEDFUNDS["1959-01-01/2008-12-31"]
data_shadow6008 <- ts_first_of_period(data_shadow6008)
data_shadow0915 <- ts(c(0.75, 0.02, -0.41, -0.15, -0.48, -0.54, -0.80, -0.88, -0.99, -1.12, -1.40, -1.47,
                        -1.27, -1.11, -1.36, -1.43, -1.44, -0.97, -1.80, -2.13, -2.62, -2.89, -2.81, -2.42,
                        -1.81, -1.38, -0.74, -0.004),
                      start = c(2009, 01), frequency = 4)
data_shadow0915 <- ts_xts(data_shadow0915)
data_shadow1620 <- data_tr$FEDFUNDS["2016-01-01/2020-03-01"]
data_shadow2021 <- ts(c(0.40, 0.08, -0.29, -1.56, -1.83, -1.81, -1.15), start = c(2020, 02), frequency = 4)
data_shadow2021 <- ts_xts(data_shadow2021)
data_shadow22end <- data_tr$FEDFUNDS["2022-01-01/"]
data_tr$shadowrate <- rbind(data_shadow6008, data_shadow0915, data_shadow1620, data_shadow2021, data_shadow22end)
```
Taylor rule with the shadow rate: Data
Taylor rule with the shadow rate
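A minimal sketch of such a comparison chart, assuming the series constructed above:

```r
# Federal funds rate, shadow rate and Taylor-rule benchmark in one chart
ts_ggplot(data_tr[, c("fedfunds", "shadowrate", "TaylorRule_cpi")]) +
  labs(title = "Federal funds rate, shadow rate and Taylor-rule benchmark", y = "percent")
```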
Estimating Monetary Policy Reaction Function
The parameters of the interest rate rule can be estimated => the monetary policy reaction function.
The obtained parameters can be compared with the benchmark rule; subsample analysis may also provide information about changes in the central bank’s priorities.
Estimation issues: which output gap and inflation measure to choose.
Extension: Central banks aim to react not to current inflation but rather to expected inflation, which can still be influenced by a change in monetary policy (Clarida, Galí, Gertler, QJE, 2000):
\[
i_t^{*} = i^{n} + \beta\left(E_t(\pi_{t+k} \mid \Omega_t) - \pi^{*}\right) + \gamma\,(\log Y_t - \log Y_t^{n}),
\]
where \(k\) is the horizon, with common values ranging from 2 to 4 quarters, and \(Ω_t\) represents the information set available to policy makers at time \(t\). The \(i^n\) is the long-run equilibrium nominal interest rate and \(π^*\) is the inflation target. The output gap is usually taken at time \(t\), despite the measurement issues.
Estimating Monetary Policy Reaction Function
We can restate the forward-looking interest rate rule in terms of the real interest rate \(r\):
\[
r_t^{*} \equiv i_t^{*} - E_t(\pi_{t+k} \mid \Omega_t) = r^{n} + (\beta - 1)\left(E_t(\pi_{t+k} \mid \Omega_t) - \pi^{*}\right) + \gamma\,(\log Y_t - \log Y_t^{n}),
\]
where \(r^n\) is the equilibrium real interest rate.
This implies that if the central bank aims to stabilize inflation, the real interest rate has to increase with increasing deviation of inflation from the target. Therefore, to stabilize inflation, the coefficient \(β\) needs to be higher than 1.
Note that \(i^* = r^* + E_t(π_{t+k}|Ω_t)\) and \(i^n = r^n + π^*\).
Estimating Monetary Policy Reaction Function
Following Clarida, Galí and Gertler (2000), we also assume interest rate smoothing, based on the belief that central banks tend to move towards their targeted interest rates gradually:
\[
i_t = a + \rho\, i_{t-1} + b\, E_t(\pi_{t+k} \mid \Omega_t) + c\,(\log Y_t - \log Y_t^{n})
\]
The last equation can be estimated. The intercept is \(a = (1-ρ)(i^n-βπ^*)\), while the slopes are \(b = (1-\rho)\beta\) and \(c = (1-\rho)\gamma\), so the long-run Taylor-rule coefficients can be recovered as \(\beta = b/(1-\rho)\) and \(\gamma = c/(1-\rho)\).
The model can be augmented with additional variables, such as asset prices (interest rate spreads being most relevant) or the exchange rate.
Forward-looking variables in macroeconomic models
\[
i_t = a + \rho\, i_{t-1} + b\, E_t(\pi_{t+k} \mid \Omega_t) + c\,(\log Y_t - \log Y_t^{n})
\]
Assuming rational expectations, expected inflation can be replaced by realized future inflation \(π_{t+k}\): since \(\pi_{t+k} = E_t(\pi_{t+k} \mid \Omega_t) + \varepsilon_{t+k}\) with the forecast error \(\varepsilon_{t+k}\) orthogonal to the information set, the resulting measurement error can be corrected for using instruments.
The instruments should mimic the information set available at time \(t\), so they typically contain lags of various macroeconomic variables.
The model with instrumental variables can be estimated using various techniques like TSLS or GMM.
GMM is currently the more popular technique in macro applications; the technique is covered in Advanced Econometrics, with a summary in the Appendix.
Data
United States, quarterly data.
Dependent variable: short-term nominal interest rate (federal funds rate); output gap: the difference between GDP and potential GDP from the BEA; inflation: based on the GDP deflator, averaged over the past four quarters.
List of instruments: lags of output, inflation, commodity price index, federal funds rate, growth of money supply, log of nominal effective exchange rate.
Weak instruments? Detect using the first-stage F-statistic; as a rule of thumb, it should be > 10 (a rough check is sketched below).
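One rough check (a minimal sketch, assuming the data_tr object constructed earlier; the subsample and the instrument set mirror those of the GMM estimation on the next slide, and a full first stage would also include the remaining right-hand-side variables):

```r
# First-stage sketch: regress the endogenous regressor (the expected-inflation proxy)
# on the instrument set and inspect the overall F-statistic (rule of thumb: F > 10)
data_fs <- data_tr["1978-10-01/2008-07-01"]
first_stage <- lm(as.numeric(data_fs$infexp) ~ lag(data_fs$infrate, k = 1:4) +
                    lag(data_fs$gdpgap, k = 1:4) + lag(data_fs$m2_ldiff, k = 1:4) +
                    lag(data_fs$fedfunds, k = 1:4) + lag(data_fs$ppiaco_ld, k = 1:4))
summary(first_stage)$fstatistic
```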
```r
#Sample starting in 1979Q3 and ending before the Great Recession
#Note that a few more quarters are included so that lags are computed for all observations in the sample
#Those observations are pretty crucial due to strong anti-inflationary measures in late 1979
library(gmm) #Library for GMM estimation (needed for the gmm() call below)

data_tr_7908 <- data_tr["1978-10-01/2008-07-01"]
tr3 <- gmm(data_tr_7908$fedfunds ~ data_tr_7908$infexp + data_tr_7908$gdpgap + lag(data_tr_7908$fedfunds, k = 1),
           ~ lag(data_tr_7908$infrate, k = 1:4) + lag(data_tr_7908$gdpgap, k = 1:4) +
             lag(data_tr_7908$m2_ldiff, k = 1:4) + lag(data_tr_7908$fedfunds, k = 1:4) +
             lag(data_tr_7908$ppiaco_ld, k = 1:4) + lag(data_tr_7908$gdpgap, k = 1:4))
summary(tr3)
```
Call:
gmm(g = data_tr_7908$fedfunds ~ data_tr_7908$infexp + data_tr_7908$gdpgap +
lag(data_tr_7908$fedfunds, k = 1), x = ~lag(data_tr_7908$infrate,
k = 1:4) + lag(data_tr_7908$gdpgap, k = 1:4) + lag(data_tr_7908$m2_ldiff,
k = 1:4) + lag(data_tr_7908$fedfunds, k = 1:4) + lag(data_tr_7908$ppiaco_ld,
k = 1:4) + lag(data_tr_7908$gdpgap, k = 1:4))
Method: twoStep
Kernel: Quadratic Spectral(with bw = 1.5659 )
Coefficients:
Estimate Std. Error t value
(Intercept) 2.5687e-01 9.4427e-02 2.7203e+00
data_tr_7908$infexp 1.8405e-01 3.7535e-02 4.9033e+00
data_tr_7908$gdpgap 1.7248e-01 3.5422e-02 4.8694e+00
lag(data_tr_7908$fedfunds, k = 1) 8.5421e-01 1.8082e-02 4.7242e+01
Pr(>|t|)
(Intercept) 6.5224e-03
data_tr_7908$infexp 9.4233e-07
data_tr_7908$gdpgap 1.1195e-06
lag(data_tr_7908$fedfunds, k = 1) 0.0000e+00
J-Test: degrees of freedom is 17
J-test P-value
Test E(g)=0: 20.93714 0.22911
Initial values of the coefficients
(Intercept) data_tr_7908$infexp
0.12168559 0.31292090
data_tr_7908$gdpgap lag(data_tr_7908$fedfunds, k = 1)
-0.04385533 0.79284714
Implied Taylor parameters: \(\alpha = 1.74\) , \(\beta = 1.26\), \(\gamma = 1.17\).
(with GDP gap based on BEA potential output: \(\alpha=2.07\), \(\beta = 1.7\), \(\gamma = 1.05\) ).
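The implied long-run parameters follow from the smoothing specification above (\(\beta = b/(1-\rho)\), \(\gamma = c/(1-\rho)\)). A minimal sketch of the computation, assuming the tr3 object estimated above (exact values may differ slightly from those quoted, depending on the sample and rounding):

```r
# Recover the long-run (implied) Taylor parameters from the partial-adjustment coefficients
coefs <- unname(coef(tr3))        # order: intercept, infexp, gdpgap, lagged fedfunds
rho   <- coefs[4]                 # smoothing parameter
alpha <- coefs[1] / (1 - rho)     # long-run intercept, i.e. i^n - beta*pi^*
beta  <- coefs[2] / (1 - rho)     # long-run response to expected inflation
gamma <- coefs[3] / (1 - rho)     # long-run response to the output gap
round(c(alpha = alpha, beta = beta, gamma = gamma), 2)
```

The output below comes from the analogous specification with the shadow rate as the dependent variable, estimated on the data_tr_9122 subsample (presumably 1991–2022, judging by the name and the note below about data until mid-2022).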
Call:
gmm(g = data_tr_9122$shadowrate ~ data_tr_9122$infexp + data_tr_9122$gdpgap +
lag(data_tr_9122$shadowrate, k = 1), x = ~lag(data_tr_9122$infrate,
k = 1:3) + lag(data_tr_9122$gdpgap, k = 1:3) + lag(data_tr_9122$m2_ldiff,
k = 1:3) + lag(data_tr_9122$shadowrate, k = 1:3) + lag(data_tr_9122$ppiaco_ld,
k = 1:3) + lag(data_tr_9122$gdpgap, k = 1:3))
Method: twoStep
Kernel: Quadratic Spectral(with bw = 1.05472 )
Coefficients:
Estimate Std. Error t value
(Intercept) 7.8993e-02 1.2164e-01 6.4938e-01
data_tr_9122$infexp 1.0541e-02 3.8089e-02 2.7674e-01
data_tr_9122$gdpgap 1.8977e-01 3.9323e-02 4.8259e+00
lag(data_tr_9122$shadowrate, k = 1) 9.8287e-01 1.2877e-02 7.6328e+01
Pr(>|t|)
(Intercept) 5.1609e-01
data_tr_9122$infexp 7.8198e-01
data_tr_9122$gdpgap 1.3940e-06
lag(data_tr_9122$shadowrate, k = 1) 0.0000e+00
J-Test: degrees of freedom is 12
J-test P-value
Test E(g)=0: 15.73773 0.20355
Initial values of the coefficients
(Intercept) data_tr_9122$infexp
-0.06838317 0.06484906
data_tr_9122$gdpgap lag(data_tr_9122$shadowrate, k = 1)
0.14576037 0.95960767
Implied Taylor parameters: \(\alpha = 4.38\), \(\beta = 0.58\), \(\gamma = 10.5\) (the output-gap coefficient looks high, but it is down from \(\gamma = 11.75\) with last year’s data; with the GDP gap from the BEA and data until mid-2022: \(\alpha=4.89\), \(\beta=−0.21\), \(\gamma =1.93\)).
Extensions and Evidence
Extensions to the baseline exercise: an explicit estimate of the natural interest rate (Clarida-Galí-Gertler, 2000).
Accounting for time variation: in the natural interest rate (Laubach and Williams, REStat, 2003), the inflation target (Leigh, JEDC, 2008) and the coefficients on inflation and the output gap (Kim-Nelson, JME, 2006). These studies suggest that the natural interest rate gradually decreased, and the same holds for the (implicit) inflation target, which moved from 4% to 2% around the mid-1990’s.
The time-varying estimates of \(β\) in Kim and Nelson confirm the results from our subsample analysis with a break in 1979.
The Taylor rule can be fitted or estimated for other countries as well, and the results usually mimic the U.S. pattern: passive monetary policy until the 1980’s, then aggressive disinflation in the 1980’s or slightly later, until inflation was stabilized.
The evidence for the 2000’s is rather mixed. The U.S. departed from the Taylor rule after 2001; in other countries both inflation and interest rates were low and stable as well, so the estimates of \(β\) are rather low.
Extensions and Evidence
Real-time vs. Ex-post data and monetary policy rules.
A. Orphanides (AER, 2001) argues that an estimated monetary policy reaction function might provide a biased view of actual monetary policy because of data revisions (mainly to the output gap).
Furthermore, in a subsequent paper (AER, 2002), he demonstrates that if real-time data are used to calculate the interest rates consistent with the rule, those rates are quite consistent with the benchmark Taylor rule even in the 1970’s, a period often characterised by loose monetary policy. His results therefore suggest that the inflation resulted from bad luck rather than from bad policy.
However, his conclusions about the U.S. monetary policy of the 1970’s are not supported by authors who allow for time variation in coefficients (Boivin, 2005; Kim-Kishor-Nelson, 2006). On top of that, Coibion and Gorodnichenko (2008) have shown that when an upward-sloping inflation trend is present, \(β > 1\) does not assure stabilization.
While the debate about bad luck versus bad policy remains open, the importance of the distinction between real-time and ex-post data is widely acknowledged.
Takeaways
Monetary rules can be used not only as simple policy guidance and benchmark but also to estimate what the central banks actually do.
Expectations in macro models => the case for an IV estimator, either TSLS or GMM.
The results for the U.S. reveal strong subsample instability.
Since the Great Recession, the main central banks were concerned more with disruptions in the financial sector and with low inflation, so interest rates reached the zero lower bound and remained there until 2016/2017. The period was characterized by unconventional policies.
These unconventional policies are likely behind the sustained asset price boom since the 2007/2008 crisis. To what extent there is a bubble on the markets remains debatable.
Changes in the conduct of monetary policy: inflation targeting remained, supplemented by macroprudential policies; broader central bank mandates.
COVID crisis: surprisingly short-lived, followed by a sharp rise in inflation; the main central banks’ response was wait-and-see for a while, followed by rapid interest rate hikes.
Additional slides
IV using the GMM
Alternative to TSLS: the method of moments. Several formulas from the method of moments illustrate the logic behind instrumental variables and provide insight into the tests and the selection of variables in each test.
What is a “moment”? The k-th moment of a random variable Y, denoted \(μ_k\), is the expected value of this variable raised to the k-th power.
The 1st moment => mean: \(E(Y^1)=μ_1\)
The 2nd moment centered around mean => variance: \(E((Y-μ_1)^2)\)
In large samples, the sample moments converge to population moments, hence the 1st moment can be calculated as sample mean \((Σy_i/n)\).
These simple rules can be used for estimation of linear and more complex models.
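A one-line illustration with simulated data (not lecture data): the sample analogues of the first two moments are just

```r
set.seed(1)
y <- rnorm(1000, mean = 5, sd = 2)
c(m1 = mean(y), m2_centered = mean((y - mean(y))^2))  # approximately 5 and 4
```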
IV using the GMM
OLS as a moment estimator: Consider the linear model \(y_i = β_1 +β_2 x_i+e_i\). We can make the following natural assumptions about the properties of the error term: \(E(e_i)=0\) and \(E(x_i e_i)=0\).
These conditions can be written in terms of population moments and, replacing expectations with sample averages, sample moments:
\[
E(y_i - \beta_1 - \beta_2 x_i) = 0 \;\Rightarrow\; \frac{1}{n}\sum_{i=1}^{n} (y_i - b_1 - b_2 x_i) = 0,
\qquad
E\big(x_i (y_i - \beta_1 - \beta_2 x_i)\big) = 0 \;\Rightarrow\; \frac{1}{n}\sum_{i=1}^{n} x_i (y_i - b_1 - b_2 x_i) = 0.
\]
The two sample moment conditions can be solved for the estimates \(b_1\) and \(b_2\). Note that these expressions are exactly the first-order conditions of OLS! Hence, for linear models the OLS and the moment estimator are equivalent.
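A tiny simulated illustration (not lecture data) that solving the two sample moment conditions reproduces OLS:

```r
# Solve the two sample moment conditions by hand and compare with lm()
set.seed(1)
x <- rnorm(200)
y <- 1 + 2*x + rnorm(200)
b2 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)  # from mean(x*e) = 0
b1 <- mean(y) - b2 * mean(x)                                     # from mean(e) = 0
c(b1 = b1, b2 = b2)
coef(lm(y ~ x))  # identical estimates
```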
IV using the GMM
Moment conditions for the linear model: \(E(e_i)=0\) and \(E(x_ie_i)=0\).
When the regressor is endogenous, the second condition is no longer satisfied. So we use instrumental variables \(Z\) that fulfill the condition \(E(z_i e_i)=0\) and are still correlated with the original regressors \(X\).
But the property of the instrumental variables \(E(z_i e_i)=0\) is very similar to the moment condition of the linear model \(E(x_ie_i)=0\)! Hence, using the method-of-moments approach, we simply replace the second moment condition of the linear model.
Moment conditions for the linear model with an endogenous variable and an appropriate instrument become
\(E(e_i)=0\)\(~ ~\) and \(~ ~\)\(E(z_ie_i)=0\), which imply the sample conditions
\[
\frac{1}{n}\sum_{i=1}^{n} (y_i - b_1 - b_2 x_i) = 0
\qquad\text{and}\qquad
\frac{1}{n}\sum_{i=1}^{n} z_i (y_i - b_1 - b_2 x_i) = 0.
\]
As in the case of the linear model, we arrive at a system of two equations with two unknown parameters \(β_1\) and \(β_2\). This estimator is consistent, since the two properties \(E(e_i)=0\) and \(E(z_i e_i)=0\) used to construct it hold in the population by assumption.
If there are more instruments for one variable => 3 (or more) equations for 2 parameters; in practice the estimator optimally weights the moment conditions (the Generalized Method of Moments, GMM). This also shows the importance of testing the overidentifying restrictions, which measure how compatible the different moment conditions are with a single set of parameter values.
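A tiny simulated illustration (not lecture data) of the moment/IV logic with the gmm package: the regressor is endogenous, a single instrument satisfies \(E(z_i e_i)=0\), and the moment estimator recovers the true coefficient while OLS does not:

```r
library(gmm)
set.seed(1)
n <- 1000
z <- rnorm(n)                  # instrument, independent of the structural error
u <- rnorm(n)                  # structural error
x <- 0.8*z + 0.5*u + rnorm(n)  # endogenous regressor (correlated with u)
y <- 1 + 2*x + u               # true slope = 2
coef(lm(y ~ x))                # OLS: biased away from 2
coef(gmm(y ~ x, ~ z))          # moment/IV estimator: close to 2
```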
IV Using the Method of Moments
The GMM and TSLS estimators converge in large samples. Which one to prefer? Large samples: GMM. Small samples: it depends on the context.
Econometric textbooks suggest TSLS in smaller samples.
In macro, GMM is often preferred in samples with more than 50 observations.
Econometric software packages usually allow for TSLS, GMM and also maximum likelihood estimation.