DICE: A fundamentally flawed model
William Nordhaus won the Nobel Memorial Prize in Economics for a model designed to assess the economic consequences of climate change. But it is fundamentally unreliable, say leading economists.
So-called integrated assessment models (IAMs) have become central tools in climate research for estimating the economic impacts of global warming and climate policies. William Nordhaus, a pioneer in this field, was awarded the Nobel Memorial Prize in Economic Sciences in 2018 for his DICE model, which integrates economic and climatic factors. The model aims to provide a picture, often expressed in terms of GDP loss, of how climate change could affect the global economy.
A conversation with:
Bob Ward is Policy and Communications Director at the Grantham Research Institute on Climate Change and the Environment, London School of Economics (LSE).
Dimitri Zenghelis is Special Advisor to the Wealth Economy project, which he co-founded, based at the University of Cambridge.
Norwegian version: An abridged version of this interview is available here.
However, leading economists like Nicholas Stern and Joseph Stiglitz have warned against taking these models at face value. They argue that IAMs, particularly those based on DICE, often underestimate risk, overlook critical factors, and fail to provide reliable answers.
But if we can’t trust them, why are they still used? And what exactly went wrong with the DICE model in the first place? We spoke with two researchers who know these models and have been warning about their limitations for years: Bob Ward, Policy and Communications Director at the London School of Economics (LSE), and Dimitri Zenghelis, Senior Research Fellow at the Cambridge Institute for Sustainability Leadership.
<2°C: – What was the rationale behind developing these integrated assessment models (IAMs)? Not just Nordhaus’ DICE model, but what prompted the development of IAMs in the first place?
Bob Ward: – William Nordhaus was the first mainstream economist to take seriously the idea that the environment could have major implications for the economy. The DICE model emerged from a desire to develop models that could assess environmental impacts on the economy. But the assumption was that environmental impacts would be marginal, and that was a mistake.
Dimitri Zenghelis: – There was a recognition that the problem of trying to limit climate change risks was one that transcended a bunch of disciplines. You needed something to tie the findings from climate science into economics, spatial geography and so on. That required a broader model than those already available.
I think that recognition led to a sort of misplaced desire to try and capture as many of these important elements under one model. It integrated outcomes associated with climate damages with outcomes associated with the macroeconomic commitment of resources to investment. As Bob said, it was very quickly recognised that some of these models were flawed.
<2°C: – How were they flawed? Do you have an example?
Zenghelis: – For instance, the structure of the economy is fixed in the model. Yet these models apply over long periods of time that not only involve structural change but require it as part of their objective function, and that change is subject to substantial uncertainty. It quickly became apparent that there was a paradox:
The model is looking to tell you what the optimal mix of policies would be to bring about outcome Y at cost X in, let’s say, 2050. But you are pre-assuming what technologies will cost, what climate impacts will look like, what consumer tastes and preferences will be, and that you know what the whole system of institutions and governance around the global economy will be in 2050. Well, if you already know the answers to those questions, there’s very little left for the model to solve.
That removal of memory and of path dependence is a critical flaw in models such as DICE. They take no account of stocks of assets. In other words, they take no account of history. DICE essentially assumes that climate damages do their thing in affecting GDP relative to baseline in any one year, and then the slate is wiped clean for the next year. In fact, climate and resource depletion damages mount, and mount more quickly, as productivity growth is eroded through endogenous effects of devalued and destroyed capital. The risk of catastrophe would be disruptive not just to one year’s output; it would destroy assets and undermine the growth and development capacity of the future, including the knowledge spillovers associated with the accumulation of capital.
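Zenghelis’s flow-versus-stock distinction can be sketched numerically. The toy calculation below (not DICE itself; the growth and damage figures are illustrative assumptions) applies the same 2 per cent annual climate damage in two ways: as a pure flow loss, where the slate is wiped clean each year, and as a drag on the growth rate itself, where losses carry forward and compound.

```python
BASE_GROWTH = 0.03   # assumed baseline GDP growth per year (illustrative)
DAMAGE = 0.02        # assumed climate damage fraction per year (illustrative)
YEARS = 50

gdp_flow = 1.0    # damages hit each year's output, then the slate is wiped clean
gdp_growth = 1.0  # damages erode the growth rate, so losses compound

flow_path, growth_path = [], []
for _ in range(YEARS):
    gdp_flow *= (1 + BASE_GROWTH)
    flow_path.append(gdp_flow * (1 - DAMAGE))   # loss never accumulates
    gdp_growth *= (1 + BASE_GROWTH - DAMAGE)    # loss carried into every future year
    growth_path.append(gdp_growth)

baseline = (1 + BASE_GROWTH) ** YEARS
# After 50 years, the flow treatment is still only ~2% below baseline,
# while the growth treatment has fallen far further behind.
print(f"flow damages:   {1 - flow_path[-1] / baseline:.1%} below baseline")
print(f"growth damages: {1 - growth_path[-1] / baseline:.1%} below baseline")
```

The gap between the two paths is the “memory” the interviewees say DICE lacks: under the growth treatment, each year’s damage degrades the base from which every future year grows.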
<2°C: – And these models can’t capture that. So what are they used for?
Ward: – The old adage “all models are wrong, but some are useful” applies here. Models like DICE have the benefit that they’re simple. You can largely see how changing some of the parameters affects the output, so they give some insights. What you can’t do is believe in the outputs, because so much important information is left out.
When you start looking at the details, the damage functions aren’t believable because they leave out a lot of the damage. The discount rate, if it’s a social discount rate, reflects the prejudices of the modellers. They assume future generations don’t matter as much as today’s generation. And secondly, they basically assume that you’ll be economically better off in the future, no matter how much damage there is, which is not true. There are more sophisticated models than DICE, which their producers say deliver better outputs, but they become black boxes, where you can’t really see what’s happening.
Zenghelis: – There are pros and cons to the DICE model. As Bob said, such models are valuable for their insights, not the numbers they produce. Once that is recognised, then suddenly a world of potential opportunity arises for using models appropriately to inform policy maker judgement.
<2°C: – How?
Zenghelis: – Let’s take a step back: Why do you want models? Well, you want models because they allow you to frame relationships which are complex, which you can’t do in your head. They impose a discipline on the modeller to do it transparently, in a way so that others in principle can come in and question the assumptions. And an important pro with the DICE model is that it is simple and transparent. This means users can play around with assumptions and parameter settings using a consistent framework to fruitfully investigate the impact on projected outcomes.
The ability to do this paradoxically sheds light on some of the model’s implicit shortcomings. Key among these is the fact that DICE is a flow model, whereas climate damages would be expected to have stock and growth effects. Because history matters: the path you set out upon, and the impact that path delivers, shape the projection going forward. That can’t be captured in a single-equilibrium optimisation model.
<2°C: – There’s also the issue that the climate impacts on the economy, according to these models, are often infeasibly small …
Ward: – The reason they’re infeasibly small is that the modellers tend to say: anything that’s too complicated or unknown, we just leave out; we treat it as if it’s zero. That applies to all the tail risks of climate change, like the climate tipping points that can trigger displacement of huge populations, conflict and war. All of that is left out of the DICE model, which is supposed to help us weigh the cost of action versus inaction.
Zenghelis: – There are similar problems on the mitigation cost side. Cost projections for key renewable technologies in IAMs have been systematically overestimated. In the real world, the cost of technology is a function of deployment: learning by doing, economies of scale and network effects mean the more you deploy it, the faster the costs come down.
But if they tried to replicate the real world in that respect, their models would become deeply unstable. It would generate multiple equilibria, and some of the models just won’t converge. They rely on one unique equilibrium. But the real world isn’t like that.
There is another reason too: they often impose backstop costs, floors below which things like the cost of renewable technologies cannot fall.
<2°C: –Why would they do that?
Zenghelis: – Because if forecasters had actually inputted in the past anything like the kinds of cost profiles that we have seen, they would have been laughed out of court. So they actively imposed restrictions on their models to prevent them from predicting the kinds of technology pathways we have seen in the real world. Which, in my opinion, gives you an indication of the limitations of these models. When you crank the handle, the result is already predetermined by the user’s assumptions and parameter settings. The results tell you a lot about the prejudice of the modeller, but very little about how the world might turn out.
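The “cost is a function of deployment” relationship Zenghelis describes is often formalised as Wright’s law: each doubling of cumulative deployment cuts unit cost by a fixed learning rate. The sketch below uses a 20 per cent learning rate, an illustrative assumption in the range commonly cited for solar PV rather than a figure from any specific IAM.

```python
from math import log2

def learning_curve_cost(initial_cost, initial_capacity, capacity, learning_rate=0.20):
    """Wright's law: cost = c0 * (capacity / capacity0) ** b,
    where b = log2(1 - learning_rate), so each doubling of
    cumulative capacity multiplies cost by (1 - learning_rate)."""
    b = log2(1 - learning_rate)
    return initial_cost * (capacity / initial_capacity) ** b

# Starting from an illustrative unit cost of 100, track cost over
# successive doublings of cumulative deployment at a 20% learning rate.
for doublings in (0, 5, 10):
    cost = learning_curve_cost(100.0, 1.0, 2 ** doublings)
    print(f"{doublings:2d} doublings: unit cost {cost:5.1f}")
```

A model with a fixed backstop floor above the levels this curve reaches would rule such a pathway out by construction, which is exactly the restriction Zenghelis objects to.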
<2°C: – This debate on the DICE model started over a decade before Nordhaus received the Nobel memorial prize in 2018. Where does the debate stand today? And what is the stance of Nordhaus and his supporters?
Ward: – Nordhaus’ early work, where he told mainstream economists that the environment plays an important role and must be integrated into macroeconomic analysis, was a key insight. It’s something we should all be grateful for because the economics profession has been slow to respond to that early message. However, his increasing reliance on his model and people using it as if it captures the economic consequences of environmental problems has been damaging.
His response has been to tweak parameters in his model, gradually moving in a direction that might make it more believable. But the structural aspects of the model, like treating economic growth as an exogenous factor, will never be appropriate. You can’t get a model like Nordhaus’s to give you believable results, no matter how sophisticated it is, because it’s structurally unsound.
<2°C: –Explain.
Zenghelis: – Part of the problem is, as previously mentioned, that DICE takes no account of history. Any account of structural change that doesn’t integrate endogenous growth is fundamentally flawed from the outset. It doesn’t fully encompass risk and uncertainty, which are inherent in climate impacts. When you start imputing zero for low probability catastrophic changes—outliers that should drive decision-making—you diminish the value of these models. The one-in-100 chance of annihilation should be a key motivator for climate action, and that should be reflected in the models. But once you start omitting these, the models’ value becomes smaller and smaller.
The problem arises from the desire to have a model that gives you accurate predictions. But risk, uncertainty, and endogeneity make predictions nearly impossible, except by chance. Predictions will always be subject to significant error due to the path-dependent nature of these processes. Small errors in early data points lead to dramatically different outcomes because they’re path-dependent, with reinforcing feedbacks throughout the system.
<2°C: – Feedbacks? Like what?
Zenghelis: – We’ve seen this in the relationship between deployment of renewable technology, which brings down costs, and lower costs, which induce greater deployment. On the climate impact side, there are similar reinforcing feedbacks, like tipping points where a bit of global warming causes physical changes that lead to more global warming. Society’s ability to respond and adapt to these changes is also path-dependent. The more battered society becomes, the less able it is to tackle the impacts of climate change, leading to disorder and conflict.
All these non-linear, path-dependent changes mean that even the best model will get its predictions wrong if there are unexpected changes. Rather than trying to find a model that tells us what the world will look like, we should be thinking in terms of risk, opportunity, flexibility, and adaptability. Policymakers need tools that help them understand the range of possible outcomes and the trade-offs involved, keeping options open for different future paths.
<2°C: – So how do we respond when people rely heavily on predictions based on Nordhaus’ work? Because if you object, for instance by referring to a more recent UNEP report that argues even a three-degree warming could have catastrophic effects and displace a billion people, they just say, “well first, Nordhaus won the Nobel prize for this, and second, this is quoted by the IPCC, so your quarrel is really with them”.
Ward: – That’s complete nonsense. People quoting older papers and reports are choosing convenience over accuracy, ignoring the most up-to-date science. In this case, they’re obviously ignoring the Sixth Assessment Report, the most recent IPCC report, which does not rely on Nordhaus’s model outputs because there’s now a recognition that his model doesn’t produce sensible numbers.
Zenghelis: – It’s an appeal to authority rather than engaging with the broader expert consensus. They could just as well quote other Nobel laureates like Paul Romer or Paul Krugman, who have different views. There are many respected economists who have made compelling arguments against the kind of modelling used by Nordhaus. And it’s important to note that even within the economic modelling community, there’s an understanding that these models have embedded biases that can lead to systematically incorrect conclusions.
Ward: – The concept of “optimal warming” itself is problematic because it implies that society could adapt perfectly to a different climate with evenly distributed costs and benefits. But in reality, society has adapted to the climate we’ve known over the past 12,000 years while civilisation has developed, and the costs of moving away from that are likely to be far greater. The impact of that is completely under-recognised by these models.
Zenghelis: – It’s not just that you’re out of the bounds of historical experience; it’s also the speed with which we are changing global temperatures. These kinds of temperature changes have happened naturally in the past, but they took tens, if not hundreds, of thousands of years. We’re doing it in a couple of hundred years. If your models aren’t accounting for this kind of obvious uncertainty related to climate science, they’re not addressing the real issues that matter.
Ward: – So if you have a model that systematically and dramatically underestimates the cost of climate change and systematically and dramatically overestimates the cost of changing our current system, then you don’t have to be a genius or a model expert to understand that the concept of «optimal» is flawed. It attempts to balance the costs of inaction against the benefits, but with everything systematically biased in the wrong way, you’re going to get the wrong answer.
<2°C: – I realise this is a leading question, but, in conclusion, should we make policy decisions based on what Nordhaus’ model predicts as the optimal warming level?
Ward: – Absolutely not.
Zenghelis: – You should never rely on the output of any single model. If you want to make sensible policy decisions, you need to use the insights from a range of models, but also from other approaches. Lessons from history, geography, social psychology, and more should inform policy. We’ve seen that expectations play a huge role—if people expect the cost of clean technologies to fall, they’ll act in ways that make them fall, due to increased deployment. So, modelling expectations, strategic behaviour, and the formation of social norms is crucial.