Volume 19, Issue 4, October 2023

An Interpretable Machine Learning Workflow with an Application to Economic Forecasting

Abstract

We propose a generic workflow for using machine learning models to inform decision-making and to communicate modeling results to stakeholders. It involves three steps: (i) a comparative model evaluation, (ii) a feature importance analysis, and (iii) statistical inference based on Shapley value decompositions. We discuss the steps of the workflow in detail and demonstrate each by forecasting changes in U.S. unemployment one year ahead using the well-established FRED-MD data set. We find that universal function approximators from the machine learning literature, including gradient boosting and artificial neural networks, outperform more conventional linear models. This better performance is associated with greater flexibility, allowing the machine learning models to account for time-varying and non-linear relationships in the data-generating process. The Shapley value decomposition identifies economically meaningful non-linearities learned by the models. Shapley regressions for statistical inference on machine learning models enable us to assess and communicate variable importance akin to conventional econometric approaches. While we also explore high-dimensional models, our findings suggest that the best trade-off between interpretability and performance is achieved when a small set of variables is selected by domain experts.
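The Shapley value decomposition underlying step (iii) can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: it enumerates all feature coalitions exactly (feasible only for a handful of features), and the `predict`, `x`, and `baseline` names are illustrative assumptions. Features absent from a coalition are set to their baseline value.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley decomposition of predict(x) relative to predict(baseline).

    Averages each feature's marginal contribution over all 2^(n-1)
    coalitions of the other features, with the standard Shapley weights.
    """
    n = len(x)
    phi = [0.0] * n

    def value(coalition):
        # Evaluate the model with features outside the coalition
        # replaced by their baseline values.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy non-linear "model" with an interaction term, to mimic the kind of
# non-linearity the paper's models can pick up.
predict = lambda z: z[0] + 2 * z[1] + z[0] * z[1]
x, base = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(predict, x, base)

# Efficiency property: the contributions sum to the prediction difference.
assert abs(sum(phi) - (predict(x) - predict(base))) < 1e-9
```

The interaction term is split evenly between the two features, so the decomposition attributes 1.5 to the first feature and 2.5 to the second; these per-feature contributions are the inputs to a Shapley regression.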

Authors

  • Marcus Buckmann
  • Andreas Joseph

JEL codes

  • C14
  • C45
  • C53
  • E27

Other papers in this issue

Frédérique Bec, Raouf Boucekkine, and Caroline Jardet

Katharina Plessen-Mátyás, Christoph Kaufmann, and Julian von Landesberger

André Teixeira and Zoë Venter

Thomas B. King, Travis D. Nesmith, Anna Paulson, and Todd Prono

Md Jahir Uddin Palas and Fernando Moreira

Raphael Auer, Giulio Cornelli, and Jon Frost