Benchmarking Judgmentally Adjusted Forecasts
Authors | Philip Hans Franses, Bert de Bruijn |
Published | 01 January 2017 |
DOI | http://doi.org/10.1002/ijfe.1569 |
BENCHMARKING JUDGMENTALLY ADJUSTED FORECASTS
PHILIP HANS FRANSES*,† and BERT DE BRUIJN
Econometric Institute, Erasmus School of Economics, Rotterdam, The Netherlands
ABSTRACT
Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only a
single final forecast is available; neither the underlying econometric model nor the size of and reason for the adjustment are known.
Hence, the relative weights given to the model forecasts and to the judgement are usually unknown to the analyst.
This paper proposes a methodology to evaluate the quality of such final forecasts, which also allows learning from past errors. To
do so, the analyst needs benchmark forecasts. We propose two such benchmarks. The first is the simple no-change forecast,
which is the bottom-line forecast that an expert should be able to improve upon. The second benchmark is an estimated
model-based forecast, found as the best forecast given the realizations and the final forecasts. We illustrate this
methodology for two sets of GDP growth forecasts, one for the USA and one for the Netherlands. These applications tell
us that adjustment appears most effective in periods of first recovery from a recession. Copyright © 2016 John Wiley &
Sons, Ltd.
Received 03 June 2016; Accepted 13 September 2016
JEL CODES: C20; C51
KEY WORDS: forecast decomposition; expert adjustment; total least squares
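The two benchmarks described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it compares a set of published final forecasts against (i) the no-change benchmark and (ii) a model-based forecast implied by a total least squares fit between final forecasts and realizations (TLS is named in the keywords; the exact estimation setup here is an assumption). The data are invented toy numbers.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared forecast error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def no_change_forecast(y):
    """Benchmark 1: forecast each period by the previous realization."""
    y = np.asarray(y, float)
    return y[:-1]  # forecasts for periods 1..T-1

def tls_fit(x, y):
    """Total least squares fit y ~ a + b*x via SVD (errors in both variables)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[-1]  # right singular vector of the smallest singular value
    b = -v[0] / v[1]
    a = ym - b * xm
    return a, b

# Toy example: GDP growth realizations y and hypothetical final forecasts f
y = np.array([2.1, 2.5, 1.8, -0.4, 1.2, 2.9, 3.1])
f = np.array([2.0, 2.3, 2.0, 0.1, 0.8, 2.5, 3.0])

e_expert = rmse(y[1:], f[1:])               # error of the final forecasts
e_naive = rmse(y[1:], no_change_forecast(y))  # error of benchmark 1
a, b = tls_fit(f, y)                          # benchmark 2: implied model a + b*f
```

If `e_expert` is below `e_naive`, the final forecasts at least beat the no-change benchmark; period-by-period error comparisons then indicate when the adjustment helped or hurt.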
1. INTRODUCTION
Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. Econometric
models can be multiple-equation systems with hundreds of variables or identities, or Bayesian vector
autoregressions or even simple extrapolation tools. An illustration of the first is given by Franses, Kranendonk
and Lanser (2011), where all the forecasts from the large macroeconomic model of the Netherlands Bureau for
Economic Policy Analysis (CPB) are manually adjusted by experts with domain-specific knowledge.
In many situations, it can be beneficial to adjust model-based forecasts. When experts foresee that the model will make a
prediction error, adjustment can help to improve accuracy. For example, adjustment may be needed because of measurement
issues in the explanatory variables at the forecast origin, or because of anticipated changes not included in the model
at the forecast origin.
Despite the potential success of expert adjustment, it is rarely documented what an expert does and why certain
decisions have been made. This hampers a straightforward evaluation of forecast errors, as it is usually unknown
which part of the error could be due to the econometric model and which part to the manual adjustment. In other
words, the relative weights given to the econometric model forecasts and to the judgement are usually unknown to
the analyst.
In this paper, we propose a methodology that allows the analyst to study the relative contribution of an expert.
In fact, our methodology indicates when, that is, for which years or quarters, the expert made the final
forecast better than an underlying model forecast, and when the expert harmed forecast quality. For this
*Correspondence to: Philip Hans Franses, Econometric Institute, Erasmus School of Economics, Rotterdam, The Netherlands.
†E-mail: franses@ese.eur.nl
Thanks are due to Richard Paap, Christiaan Heij and Tom Wansbeek for various helpful comments.
International Journal of Finance & Economics
Int. J. Fin. Econ. 22:3–11 (2017)
Published online 26 October 2016 in Wiley Online Library
(wileyonlinelibrary.com). DOI: 10.1002/ijfe.1569