CONSISTENT VARIANCE OF THE LAPLACE-TYPE ESTIMATORS: APPLICATION TO DSGE MODELS

INTERNATIONAL ECONOMIC REVIEW
Vol. 57, No. 2, May 2016
DOI: http://doi.org/10.1111/iere.12169

BY ANNA KORMILITSINA AND DENIS NEKIPELOV1
Southern Methodist University, U.S.A.; University of Virginia, U.S.A.
The Laplace-type estimator has become popular in applied macroeconomics, in particular for estimation of dynamic
stochastic general equilibrium (DSGE) models. It is often obtained as the mean and variance of a parameter’s quasi-
posterior distribution, which is defined using a classical estimation objective. We demonstrate that the objective must be
properly scaled; otherwise, arbitrarily small confidence intervals can be obtained if calculated directly from the quasi-
posterior distribution. We estimate a standard DSGE model and find that scaling up the objective may be useful in
estimation with problematic parameter identification. In this case, however, it is important to adjust the quasi-posterior
variance to obtain valid confidence intervals.
1. INTRODUCTION
In spite of the popularity of medium-scale dynamic stochastic general equilibrium (DSGE)
models in empirical macroeconomic research, their estimation is often associated with practical
difficulties. For an applied researcher, the problems with estimation range from the possibility
of multiple local solutions to poor identification of model parameters due to the flatness of the
objective function in the vicinity of the extremum. In estimations with classical objectives, it
has become popular to rely on Bayesian methods by using the Laplace-type estimator (LTE).
(See Christiano et al., 2010; Coibion and Gorodnichenko, 2011; Kormilitsina, 2011; Schmitt-Grohé and Uribe, 2011, among others.) The LTE is a Bayesian alternative to the classical
extremum estimators. It consists in formulating the so-called “quasi-likelihood” function based
on a prespecified statistical criterion, which could be derived from the generalized method of
moments (GMM) objective, the maximum likelihood, or another classical estimator. The
quasi-likelihood function implies the quasi-posterior distribution of model parameters, which
can be evaluated using Markov Chain Monte Carlo (MCMC) algorithms, and the estimate is
then obtained as the mean or a quantile of the quasi-posterior distribution.
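The construction just described can be sketched in a few lines. The toy example below is our own illustration, not from the article: a single moment condition E[x − θ] = 0 with an identity weighting matrix, a flat prior, and a random-walk Metropolis sampler; the data-generating process, proposal step size, and chain length are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "model" says E[x] = theta; true theta = 2.0 (assumption for illustration).
x = rng.normal(2.0, 1.0, size=500)
n = len(x)

def gmm_objective(theta):
    """Quadratic-form GMM criterion Q_n(theta): one moment, identity weight."""
    g = np.mean(x - theta)      # sample moment condition
    return 0.5 * n * g ** 2     # n-scaled quadratic form

def quasi_log_posterior(theta):
    # Flat prior on a wide interval; quasi-likelihood is exp(-Q_n(theta)).
    if not (-10.0 < theta < 10.0):
        return -np.inf
    return -gmm_objective(theta)

# Random-walk Metropolis chain on the quasi-posterior.
draws = np.empty(20000)
theta, lp = 0.0, quasi_log_posterior(0.0)
for i in range(draws.size):
    prop = theta + 0.1 * rng.normal()
    lp_prop = quasi_log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws[i] = theta

chain = draws[5000:]          # discard burn-in
lte_mean = chain.mean()       # LTE point estimate: the quasi-posterior mean
lte_var = chain.var()         # quasi-posterior variance
print(lte_mean, lte_var)
```

With this quadratic criterion the quasi-posterior is Gaussian around the sample mean, so the chain's mean should land near `x.mean()` and its variance near 1/n.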
The popularity of the LTE is largely due to the result in Chernozhukov and Hong (2003),
who demonstrate that the estimator is both theoretically and computationally attractive. From
the computational perspective, the LTE allows one to overcome the curse of dimensionality
problem related to the search of the extremum in classical estimation, because it relies on MCMC
methods instead of costly search procedures. From the theoretical point of view, Chernozhukov
and Hong (2003) establish that under mild assumptions, the LTE is asymptotically equivalent
to the corresponding frequentist extremum estimator. Moreover, if the generalized information
equality (GIE) holds, then the variance of the quasi-posterior distribution provides a consistent
estimate for the variance of the corresponding frequentist estimator. However, if the GIE
does not hold, then the variance of the parameter estimate cannot be approximated by the
Manuscript received February 2014; revised November 2014.
1 This article has benefited from discussions with Han Hong, Atsushi Inoue, Frank Schorfheide, and James Stock. We would also like to thank the editor, Jesús Fernández-Villaverde, and two anonymous referees for their insightful comments. Please address correspondence to: Anna Kormilitsina, Southern Methodist University, 3300 Dyer Street, Suite 301, Umphrey Lee, Dallas, TX 75275. E-mail: annak@smu.edu.
© 2016 by the Economics Department of the University of Pennsylvania and the Osaka University Institute of Social and Economic Research Association
quasi-posterior distribution. Instead, one should transform the quasi-posterior variance using
the “sandwich formula” in Chernozhukov and Hong (2003).2
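As we recall the Chernozhukov and Hong (2003) result, the relationship can be summarized as follows; here J and I are our own shorthand for the Hessian of the limit objective and the variance of its score, not notation taken from this article.

```latex
\[
  \sqrt{n}\,\bigl(\hat\theta - \theta_0\bigr)
    \xrightarrow{d} N\!\bigl(0,\; J^{-1} I J^{-1}\bigr),
  \qquad
  n \,\mathrm{Var}_{\text{quasi-posterior}}(\theta) \xrightarrow{p} J^{-1}.
\]
% Under the GIE (I = J) the two variances coincide. Otherwise the raw
% quasi-posterior variance V_qp must be "sandwiched":
\[
  \widehat{\mathrm{Var}}\bigl(\hat\theta\bigr)
    \;=\; V_{\mathrm{qp}} \,\bigl(n \hat I\bigr)\, V_{\mathrm{qp}},
\]
% which reduces to V_qp itself exactly when the information equality holds.
```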
In this article, the focus is on situations where the GIE is not satisfied. More specifically,
we study the LTE derived using a GMM objective. These estimators are popular in empirical
macroeconomic research; however, it is often difficult to ensure the GIE in these problems,
because an efficient weighting matrix cannot be reliably computed given the sample size in
these applications.3 Because relying on efficient weighting may significantly hinder the small
sample performance of the estimator, researchers often resort to diagonal or other inefficient
weighting matrices in formulating the GMM objective.
Within the class of GMM problems, our contribution is the following: First, we demonstrate
that even when the weighting matrix is efficient, the GIE may fail if the objective function is not
scaled correctly. We show that although in classical GMM estimation, the scaling of the objective
function is not essential for the calculation of variance, proper scaling is crucial in LTE, as it
modifies the quasi-posterior distribution. In particular, larger scaling implies smaller variance
of the quasi-posterior distribution. We therefore conclude that one can calculate confidence
intervals directly from quasi-posterior distributions only in efficient estimation problems with
proper scaling of the objective function.
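The scaling effect can be seen in a stylized example (our own, not the article's estimation): with a quadratic criterion, multiplying the objective by a factor μ shrinks the quasi-posterior variance by roughly 1/μ. The moment condition, sample, and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=200)   # toy sample (assumption for illustration)
n = len(x)

def Q(theta):
    g = np.mean(x - theta)           # single moment, identity weight
    return 0.5 * n * g ** 2

grid = np.linspace(-1.0, 1.0, 20001)
qvals = np.array([Q(t) for t in grid])

def quasi_posterior_variance(mu):
    """Variance of p(theta) proportional to exp(-mu * Q_n(theta)), flat prior."""
    w = np.exp(-mu * (qvals - qvals.min()))   # subtract min for numerical stability
    w /= w.sum()
    m = (w * grid).sum()
    return (w * (grid - m) ** 2).sum()

v1 = quasi_posterior_variance(1.0)
v10 = quasi_posterior_variance(10.0)
print(v1, v10, v1 / v10)   # the ratio is close to the scaling factor 10
```

Reading off a confidence interval directly from the μ = 10 chain would therefore understate the uncertainty by about a factor of ten unless the variance is transformed back.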
Our second contribution is of a practical nature. We find that in empirical applications, it may
be optimal to force deviation from the GIE by scaling up the objective function. In an empirical
exercise, we estimate a simple DSGE model using real and simulated data. We first document
that the variance of the quasi-posterior distribution is generally inversely proportional to the
scaling parameter. Moreover, the variance of the LTE calculated by properly transforming the
variance of the quasi-posterior distribution is robust to the choice of the scaling parameter.
However, we find that these conclusions fail when scaling is absent (i.e., when the scaling parameter μ = 1). In this case,
both the variance of the MCMC chains and the variance of the estimator are usually greater
than those at μ>1, contrary to the predictions of theory. This result is indicative of the
poor performance of the unscaled LTE, which we relate to the presence of poorly identified
parameters and small samples. We confirm this idea in a Monte Carlo experiment where we
repeatedly estimate the model using artificially generated data sets. We find that increasing the
scaling parameter of the objective function allows one to reduce both the bias and variance
of parameter estimates. We therefore conclude that in empirical applications, the scaling of
the objective can be used as an instrument to improve the outcome of estimation. It has to be
emphasized, however, that confidence intervals of the estimator in this case must be obtained
by appropriately transforming the variance of the quasi-posterior distribution.
Implementation of the LTE parallels that in the Bayesian estimation, which has also become
a popular approach in empirical macroeconomics (see, for example, An and Schorfheide, 2007;
Fernández-Villaverde, 2010; Aruoba and Schorfheide, 2011; Fernández-Villaverde et al., 2012;
and references therein). The uniqueness of the LTE, however, is that it relies on Bayesian
methods to address alternative, classical estimation problems.4 The LTE based on the maximum likelihood estimator is most similar to the Bayesian estimation methods commonly used
to estimate DSGE models. Both the LTE and the Bayesian approach therefore face similar
difficulties in empirical applications, stemming from problematic parameter identification and
short data samples. However, although scaling of the quasi-likelihood function may help re-
solve these problems for LTE, it cannot be helpful for Bayesian estimation. The reason is that
the Bayesian approach assumes that the structural parameters are of a stochastic, instead of
a deterministic, nature. This means that a Bayesian economist is interested in evaluating the
2 See Theorems 2 and 4 in Chernozhukov and Hong (2003).
3 This is usually the case in minimum-distance estimation problems that aim to match a large number of impulse responses or moments of the model and data. See Christiano et al. (2010), Kormilitsina (2011), and DiCecio (2009).
4 Although in this article we focus on the LTE based on a GMM objective, our results can be easily extended to other classical estimation methods where the LTE is commonly applied, for example, extremum estimators that contain nonparametric plug-in components. See Altonji and Segal (1996), Windmeijer (2005), and Newey and Windmeijer (2009), among many others.
