Economists could be forgiven for failing to predict unique, so-called ‘Black Swan’ events such as the recent Global Financial Crisis. But why are so many routine forecasts also inaccurate?
A forecast is usually communicated to the public as a confidence interval: ‘we expect growth to lie between 2.5% and 3.2%, 90% of the time’. However, the newspaper headline may read: “Annual Growth Forecast to be 2.7%”, wiping the acknowledgement of uncertainty from existence. This is a subtle but dangerous flaw: presented as fact, even a slight deviation can confound expectations.
So does the fault lie in communication? Not entirely. Economists, who should understand the limitations of their forecasts, can be found guilty of over-confidence in their predictions too. Over a period of 18 years, the Survey of Professional Forecasters found that annual GDP growth fell outside the stated 90% confidence intervals around one third of the time: the forecasters were wrong roughly 20 percentage points more often than they claimed they would be.
With both the neglect of uncertainty and substantial overconfidence contributing to the poor reputation of economic forecasting, where can we look for inspiration?
Weather forecasting dates back to the Babylonians, thousands of years ago, yet 8% of Met Office next-day temperature forecasts are still out by more than 2°C. With all the resources, data and compelling motivation available, should the forecasts not be more accurate?
In fact, over the past 25 years alone, weather forecasting has become 350% more accurate, despite the chaos that is the weather. In 1961, Edward Lorenz pioneered ‘Chaos Theory’ when his weather model started churning out inconsistent results: a forecast of clear skies in one simulation turned into a storm in a re-run that differed only in the rounding of its inputs. His theory, that a small change in the inputs to a dynamic, nonlinear system such as the weather can have extraordinary consequences, is where the ‘butterfly effect’ arose.
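The sensitivity Lorenz observed can be sketched in a few lines. The toy example below is not a weather model: it uses the logistic map, a textbook chaotic system, to show how two runs whose starting values differ by one part in a million soon diverge completely.

```python
# Toy illustration of sensitive dependence on initial conditions
# (the "butterfly effect"), using the chaotic logistic map as a
# stand-in for a weather model.

def trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x) and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run_a = trajectory(0.300000)
run_b = trajectory(0.300001)  # initial condition differs by only 1e-6

max_gap = max(abs(a - b) for a, b in zip(run_a, run_b))
print(f"Largest divergence over 50 steps: {max_gap:.3f}")
```

The two trajectories track each other closely at first, then the tiny initial difference is amplified at every step until the runs bear no resemblance to one another.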
Despite the challenge, meteorologists have tamed the issue in a fairly short amount of time through the aggregation of predictions, the continual improvement of weather models and the use of exponentially increasing computational capacity.
The first lesson economists could take from meteorologists is that of aggregating predictions from different models to create one forecast. When the Met Office says ‘the likelihood of rain is 70%’, they really mean ‘in 7 out of 10 of our model runs there was rain’. By employing this technique, economists could incorporate a quantitative form of uncertainty back into their forecasts, thus reducing the tendency for confidence in a prediction to be overstated.
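As a hypothetical sketch (every number below is invented for illustration, not a real forecast), here is how an ensemble of model outputs might be turned into a single probabilistic statement:

```python
# Illustrative ensemble aggregation: each value is one model's point
# forecast of annual GDP growth (%). All numbers are made up.
growth_forecasts = [2.6, 2.9, 3.1, 2.4, 2.8, 3.0, 2.7, 2.5, 3.2, 2.8]

ensemble_mean = sum(growth_forecasts) / len(growth_forecasts)

# Met Office-style probability: the share of models exceeding a threshold
p_above_2_5 = sum(f > 2.5 for f in growth_forecasts) / len(growth_forecasts)

print(f"Ensemble mean: {ensemble_mean:.2f}%")
print(f"P(growth > 2.5%) = {p_above_2_5:.0%}")
```

The headline number (the mean) survives, but it now comes packaged with an honest probability derived from how much the models disagree.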
In fact, this technique would be useful for the same reason it works in meteorology: the economy is subject to chaos too. Even simple models, such as the Solow-Swan growth model, exhibit the underlying characteristics of an economy, non-linearity and dynamism, from which such sensitivity can arise. As with the weather, small differences in inputs can generate large deviations in economic outcomes. The Bank of England is currently trying to solve this very problem. Its recent use of Monte Carlo methods, randomly perturbing a model’s input values to account for measurement uncertainty and iterating its predictions over and over, echoes the techniques applied in meteorology: thousands of variant outcomes are aggregated into best-case, worst-case and most-likely scenarios.
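A minimal Monte Carlo sketch of that approach follows. The toy growth relation and all parameter values are illustrative assumptions, not Bank of England figures: uncertain inputs are drawn from distributions, the model is run thousands of times, and the outcomes are summarised as scenarios.

```python
# Monte Carlo sketch: perturb uncertain inputs of a toy growth
# calculation, run it many times, and aggregate the outcomes into
# worst-case / most-likely / best-case scenarios.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

def growth_rate(savings, depreciation, tech_growth):
    """Toy linearised growth relation, for illustration only."""
    return tech_growth + savings * 0.05 - depreciation * 0.02

outcomes = []
for _ in range(10_000):
    s = random.gauss(0.25, 0.03)    # savings rate, with measurement noise
    d = random.gauss(0.05, 0.01)    # depreciation rate
    g = random.gauss(0.02, 0.005)   # technology growth
    outcomes.append(growth_rate(s, d, g) * 100)  # as a percentage

outcomes.sort()
worst = outcomes[len(outcomes) // 20]    # ~5th percentile
best = outcomes[-(len(outcomes) // 20)]  # ~95th percentile
likely = statistics.median(outcomes)
print(f"Worst {worst:.2f}%, most likely {likely:.2f}%, best {best:.2f}%")
```

The output is no longer a single number but a range, which is exactly the quantitative form of uncertainty the previous section argued should survive into the headline.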
So what else can economists learn from the progression of weather forecasts? The project manager of the Good Judgement Project makes a very simple suggestion: keep score!
The Met Office has to make weather predictions hourly for weeks in advance and update these continually as new information appears. These forecasts are then calibrated against reality. Meteorologists have a real incentive to ask, “why was our forecast wrong?”, and improve their models in light of an onslaught of emails arriving from the public. Economists could benefit from the very same incentive.
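Keeping score can be as simple as a calibration check: how often did reality actually fall inside the stated 90% intervals? The intervals and outcomes below are invented for illustration, chosen to mirror the roughly one-in-three miss rate reported earlier.

```python
# Calibration check on made-up forecasts: (lower %, upper %) pairs are
# stated 90% intervals for annual growth; actuals are what happened.
forecasts = [
    (2.5, 3.2), (1.8, 2.6), (0.5, 1.9), (2.0, 3.0), (3.0, 4.1),
    (1.0, 2.2), (2.4, 3.5), (0.0, 1.5), (1.5, 2.8), (2.8, 3.9),
]
actuals = [2.7, 3.0, 1.2, 1.1, 3.5, 2.0, 4.0, 0.9, 2.2, 3.0]

hits = sum(lo <= a <= hi for (lo, hi), a in zip(forecasts, actuals))
coverage = hits / len(forecasts)
print(f"Stated confidence: 90%, actual coverage: {coverage:.0%}")
```

A forecaster whose 90% intervals capture reality only 70% of the time is overconfident, and a score like this makes that visible year after year.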
Parallels can further be drawn from weather forecasting to provide hope for economists. One critical development in meteorology has been the proliferation of computing power and its use in forecasting. The IBM Bluefire supercomputer at the National Center for Atmospheric Research (USA) can make 77 trillion calculations per second, which is integral for dealing with the chaos and complexity of weather models. Economics has its own catalyst: the advent of Big Data. Although an embryonic discipline, its techniques enable the analysis of data sets so large and complex that traditional data processing methods are inadequate for them. Its sub-field, Data Science, which performs predictive analysis on data, has already shown promise in forecasting behaviour.
A New Dawn?
In contrast to meteorology, economics is a comparatively new discipline with looser causal associations and a more limited understanding of its underlying mechanisms. Economists must therefore be careful not to over-rely on naïve data-driven models, and should instead combine them with empirically proven concepts.
However, Big Data, as with computing power in meteorology, could form a path to more accurate and reliable forecasts. Data-supported models that integrate concepts of causation with the correlations found in data will undoubtedly have a role to play in the future of forecasting. This is not to say economists should forget their limitations: events such as the Subprime Mortgage Crisis, which defy a model’s assumptions, render that model useless. But learning is the defining feature of progress, and keeping score of mistakes will facilitate it.
Only through the acknowledgement of uncertainty and effective use of techniques can a new dawn truly begin.
Original illustration by Jack Roalfe