Coronavirus: how to model the economic impacts of a pandemic
Not long after the coronavirus COVID-19 grabbed news headlines, requests started coming to Cambridge Econometrics to model the economic impacts – so we quickly began looking into the possibilities. We subsequently published our initial modelling results.
The OECD had published estimates of impacts of the coronavirus on GDP. But how could we go about modelling this? Is it even possible? And, as always, does the choice of model matter in such an exercise?
Some of the more general economic questions have already been addressed in an excellent post, “The economic effects of a pandemic”, by Simon Wren-Lewis, who has prior experience of the issue. I will try not to repeat his points too much and will focus instead on issues related to modelling.
Keeping focus on the priorities
We must be absolutely clear that in a pandemic what happens to the economy is only of secondary importance. In all fields of study people naturally focus on what they work on every day – in economics this often comes across as a (misguided) obsession with GDP. But in a pandemic the priority is making sure as many people as possible survive and human suffering is limited as much as possible.
In this context, economic implications play second fiddle but can still be important – if nothing else because they may affect health outcomes. For example, if an infected person is worried about their future income they may skimp on health care, or even try to keep working, thereby infecting more people.
As with any crisis, central banks will need to monitor financial stability and be ready with necessary firepower but, more generally, what happens to aggregate GDP (let alone stock markets) is of secondary importance.
The missing micro
Our diseases and the economy both owe their existence to human interactions. If people didn’t come into close contact, diseases would not be able to survive. We would also have no trade and, hence, no economy. This is why measures to reduce the spread of disease inevitably reduce rates of economic activity, even in the age of the internet.
The need for micro-level interactions points us towards network theory and the economics of complexity. Recently I have been working with the Sugarscape model, which can be used to show both how a primitive economy evolves and how disease propagation can wreck it. However, fascinating as the model is, Sugarscape stops thousands of years short of the modern economy.
There is no model that can assess simultaneously how disease spreads and how the economy will react.
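To see why a joint model is so hard, consider what even a deliberately naive coupling looks like. The sketch below bolts a textbook SIR epidemic model onto economic output by assuming GDP falls one-for-one with the share of the population currently infected. Every parameter here is hypothetical and purely illustrative; the point is how much such a coupling leaves out – networks, sectors, behavioural responses, policy.

```python
# A deliberately naive coupling of a textbook SIR epidemic model to
# economic output: GDP is scaled by the share of people not infected.
# All parameters are hypothetical. Real assessment would need far
# richer micro interactions than this captures.

def simulate(beta=0.3, gamma=0.1, days=200, pop=1.0, i0=1e-4):
    """Discrete-time SIR; returns a daily path of output (1.0 = normal)."""
    s, i, r = pop - i0, i0, 0.0
    output = []
    for _ in range(days):
        new_inf = beta * s * i / pop   # new infections per day
        new_rec = gamma * i            # recoveries per day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        output.append(1.0 - i)         # crude: GDP falls 1:1 with infections
    return output

gdp_path = simulate()
trough = min(gdp_path)
print(f"peak output loss: {1 - trough:.1%}")
```

Even in this toy version, the output path is driven entirely by the epidemiological parameters – there is no feedback from economic behaviour (say, voluntary distancing) back to the infection rate, which is exactly the interaction a credible joint model would need.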
Dealing with uncertainty
Unfortunately, complexity science also teaches us about the importance of uncertainty. This is why estimates of infection rates have such wide bands.
Or, to put it another way: if the number of people infected doubles every few days, then the exact day on which the trend breaks down (which we cannot know) matters enormously.
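To make that point concrete, a minimal sketch with purely hypothetical numbers (these are not epidemiological estimates):

```python
# Illustrative only: how sensitive cumulative infections are to the
# (unknowable) day on which exponential growth breaks down.
# Numbers are hypothetical, not real epidemiological estimates.

def infections(doubling_days: float, break_day: int, start: int = 100) -> float:
    """Cases grow exponentially until break_day, then stop growing."""
    return start * 2 ** (break_day / doubling_days)

# With cases doubling every 3 days, shifting the break point by just
# one week multiplies the eventual case count roughly fivefold.
early = infections(doubling_days=3, break_day=30)
late = infections(doubling_days=3, break_day=37)
print(round(late / early, 2))
```

A one-week error in an assumption we cannot verify changes the answer by a factor of about five – which is why point estimates of economic impacts built on such inputs should be treated with great caution.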
If we want to assess economic impacts, the best we can do is take a set of estimates of infection rates from the official figures. Even then, however, we may be missing important information. For example, health sector employees are likely to be more at risk. Professionals with tele-working options may be less at risk. Risks for teachers will depend entirely on policy responses.
Beware of standard models
It is now widely accepted that the pandemic will be both a supply-side shock and a demand-side shock. Reduced working capacity will limit the capacity to produce but restrictions on freedom and a general feeling of panic will reduce spending and hence demand. Any economic assessment must be able to capture both types of shock to be useful.
I expect we will soon see lots of analyses based on Computable General Equilibrium (CGE) models. These should be ignored.
CGE models cannot assess demand-side shocks and the claim that the global economy is in equilibrium in the middle of a crisis is not credible.
We are already seeing supply-demand imbalances emerging, for example panic-buying of toilet paper and Apple scaling back iPhone production. These could not happen in a CGE model.
New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models like the one used by the OECD are scarcely any better. For one, they lack the sectoral detail that is going to be so important for extracting any meaning from the analysis. They are also pinned to the false mast of equilibrium, essentially assuming that we just need some time for everything to get back to normal. They were found to be near-useless in the 2008-09 financial crisis (an area supposedly well within their expertise) so do not expect anything useful from them now.
Can Cambridge Econometrics do better?
The framework offered by the E3ME macroeconomic model addresses some of the criticisms above. It has a detailed sectoral disaggregation and accepts that a severe crisis will put the economy on to a different path (for example higher rates of long-term unemployment may be an outcome).
Unlike most standard models, it doesn’t make unrealistic assumptions about stimulus policies being ineffective.
There remain serious limitations, however. The range of uncertainty in the inputs (e.g. infection rates, fatality rates) will not go away. The economic restructuring that we are likely to see (e.g. a long-term shift to video-conferencing) is also not possible to predict.
So, is it worth modelling at all?
This points to a deeper question of what we want from macroeconomic models. In this case, estimates of GDP impacts are useless – they are based on things we cannot know and anyway don’t help policy makers plan. The models instead should be used to help understand the economic processes involved. For example, if the primary effects are through collapsing investment or loss of trade, then construction and manufacturing companies will need to adjust first.
Providing insights like these is, of course, what economists are meant to do. If based on realistic assumptions, models can be a tool to aid understanding and to assist with future planning. In the future, network-based models might be able to provide a richer set of analyses. For now we will have to make do with the available tools, but the key insights will come from understanding how the economy will evolve, not the overall cost in terms of GDP.
I read this with interest – notably, it was the first article that came up in my Google search on the economic modelling of the impacts of the COVID crisis. I think it’s about time there was more serious debate about the types of models used for the purposes of economic policy making.
I’d disagree with your point on CGE models. It’s mathematically and theoretically possible to drive a wedge between supply and demand and create market disequilibrium – you may want to look at the work of Barro and Grossman in this space. I think that your point is more that many general equilibrium modellers don’t really address this issue head on and tend to assume it away. So it is not the case that CGE models “cannot” do this, it’s just an issue that is too often sadly ignored.
Either way, it would be interesting to see how you think a New Keynesian perspective supplants, say, the work of Malinvaud, or even Richard Layard’s modelling approaches.
I agree with your point around the relative usefulness of DSGE models in this context. You need detailed sectoral analysis at a time like this – particularly as Government policy at the moment is seemingly operating on a sector-by-sector basis. However, I was working in a major financial institution back in 2008, and what I saw was that DSGE models were predicting the effects of the financial crisis quite well. The modellers were so frightened of the results that they forcibly changed them back to something less severe: they did not fully understand the outputs and were scared of the consequences. The model gave realistic predictions, but economists at the time were so used to relative economic stability that they thought the results implausible. I found this particularly interesting given that the models’ neo-classical design and reliance on historical data can leave them more exposed to the Lucas critique. It would be interesting to understand more about how your modelling techniques deal with this challenge (maybe another blog to think about?).
So it’s worth taking account of the fact that it’s often the modellers and the institutions they work in that have the problem, not just the theoretical structure of the models themselves.
The lessons learned from the financial crisis are now hopefully being applied this time round – only time will tell.
But forgive my ramblings – well done for getting such an interesting blog into the market and stimulating the debate.
Hi – thanks for the fascinating comments!
On CGE models, yes I agree. At Cambridge Econometrics we often partner with another group that uses the GEM-E3-FIT model, which has built in many constraints and frictions. Policy makers need these aspects, especially relating to employment, and I would urge other CGE modellers to bring in as much as they can.
Your remarks on the institutional setting are especially interesting and not something I had heard before. I can tell you we had a similar debate internally at Cambridge Econometrics in late 2008 about how different our projections could be from everyone else’s. I remember an econometric analysis later showed that the consensus forecast was driving individual forecasts as much as the other way round!
Uncertainty and non-linearity bring us on to the Lucas Critique and questions of time invariance. It is an important issue that requires careful thought, and often the best we can do is consider it on a case-by-case basis. My personal work on better integrating complexity is also aimed at addressing some of the limitations of the econometrics. Always happy to write more on that…