In #1 we looked at natural variability – how the climate changed over decades and centuries before we started burning fossil fuels in large quantities. Clearly, then, many past trends were not caused by burning fossil fuels, and we need some method to attribute (or not) a recent trend to human activity. This is where climate models come in.
In #3 we looked at an example of a climate model producing the right 20th-century temperature trend for the wrong reason.
The Art and Science of Climate Model Tuning is an excellent paper by Frederic Hourdin and a number of co-authors. It got a brief mention on the old blog in Models, On – and Off – the Catwalk – Part Six – Tuning and Seasonal Contrasts. One of the co-authors is Thorsten Mauritsen who was the lead author of Tuning the Climate of a Global Model, looked at in another old article, and another co-author is Jean-Christophe Golaz, lead author of the paper we looked at in #3.
They explain that there are lots of choices to make when building a model:
Each parameterization relies on a set of internal equations and often depends on parameters, the values of which are often poorly constrained by observations. The process of estimating these uncertain parameters in order to reduce the mismatch between specific observations and model results is usually referred to as tuning in the climate modeling community.
Anyone who has dealt with mathematical modeling understands this: some parameters are unknown, or have only a broad range of plausible values.
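To make the idea concrete, here is a minimal sketch of tuning as parameter estimation. Everything in it is invented for illustration – the “model”, the parameter name and the “observation” are toy stand-ins, and a real climate model has dozens of such parameters, with each evaluation costing a full simulation:

```python
# Toy illustration of tuning: choose the parameter value that minimizes the
# mismatch between model output and an observation. All names and numbers
# here are hypothetical stand-ins, not any real climate model.

def toy_model(entrainment_rate):
    """A pretend one-parameter 'model' of global-mean TOA imbalance (W/m^2)."""
    return 3.0 - 4.0 * entrainment_rate  # made-up relationship

observed_imbalance = 0.6  # stand-in "observation" (W/m^2)

def mismatch(param):
    return (toy_model(param) - observed_imbalance) ** 2

# Brute-force search over the plausible range of the parameter.
candidates = [i / 1000 for i in range(1001)]  # 0.000 .. 1.000
tuned = min(candidates, key=mismatch)
print(f"tuned entrainment_rate = {tuned:.3f}")  # ~0.600
```

The point is not the mechanics – real tuning is often done by hand, one expensive simulation at a time – but that the “best” value is whatever makes the chosen metric match, which is not the same as knowing the parameter’s true value.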
An interesting comment:
There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
The authors are advocating for more transparency on this topic:
It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.
Here’s an example from the paper. We’ll focus on the first graph. It might look a bit confusing, but I’ll try to explain what it’s showing:
There are four models (the four colors). We can see that they use very different values of the same parameter (the bottom axis) and yet all get the same top-of-atmosphere radiation values (the left axis). If instead they all used the same value of the parameter, they would get wildly different radiation values.
It’s a simple example of the problem. No one knows the “right value” of this parameter, yet all of the models produce roughly the correct climate despite using different values, because errors in other parameters in each model compensate.
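Here is a toy version of that cancellation, with invented numbers: two “models” that disagree about one parameter can still match on top-of-atmosphere radiation, because a second uncertain parameter absorbs the difference – and forcing them to share a single value breaks the agreement:

```python
# Toy illustration of compensating errors. The "model" and all numbers are
# invented for illustration, not taken from any real climate model.

def toa_radiation(cloud_param, aerosol_param):
    """A pretend two-parameter 'model' of TOA radiation (W/m^2)."""
    return 240.0 + 10.0 * cloud_param - 5.0 * aerosol_param

# Model A and Model B disagree on cloud_param by a factor of three...
model_a = toa_radiation(cloud_param=0.3, aerosol_param=0.2)
model_b = toa_radiation(cloud_param=0.9, aerosol_param=1.4)
print(model_a, model_b)  # 242.0 242.0 -- the errors cancel

# ...but give both the same cloud_param (each keeping its own aerosol_param)
# and the agreement vanishes:
print(toa_radiation(0.6, 0.2), toa_radiation(0.6, 1.4))  # 245.0 239.0
```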
Although tuning is an efficient way to reduce the distance between model and selected observations, it can also risk masking fundamental problems and the need for model improvements.
I’ve included a few extracts from the paper in the Notes below.
An interesting example that we might look into later is the recent generation of models that participated in CMIP6, the intercomparison project for the IPCC 6th assessment report. Overall the models added complexity – “more realism” – and yet a number of them produce projections of future warming that are considered much too warm.
They got “better” in principle, yet that made them worse.
This isn’t a surprise if you’ve grasped the main idea from this article. More complexity means more poorly constrained parameters, and the space of plausible combinations grows multiplicatively – with just 10 plausible values for each of 20 parameters there are 10^20 combinations – so selecting the right parameters, and avoiding compensating errors, becomes even more difficult.
Climate modeling is a difficult subject. Taking the output of a climate model at face value isn’t a good option.
Notes
A few extracts from the paper, with references removed for clarity:
Climate model development is founded on well-understood physics combined with a number of heuristic process representations. The fluid motions in the atmosphere and ocean are resolved by the so-called dynamical core down to a grid spacing of typically 25–300 km for global models, based on numerical formulations of the equations of motion from fluid mechanics. Subgrid-scale turbulent and convective motions must be represented through approximate subgrid-scale parameterizations. These subgrid-scale parameterizations include coupling with thermodynamics; radiation; continental hydrology; and, optionally, chemistry, aerosol microphysics, or biology.
Parameterizations are often based on a mixed, physical, phenomenological and statistical view. For example, the cloud fraction needed to represent the mean effect of a field of clouds on radiation may be related to the resolved humidity and temperature through an empirical relationship. But the same cloud fraction can also be obtained from a more elaborate description of processes governing cloud formation and evolution. For instance, for an ensemble of cumulus clouds within a horizontal grid cell, clouds can be represented with a single mean plume of warm and moist air rising from the surface or with an ensemble of such plumes.
Similar parameterizations are needed for many components not amenable to first-principle approaches at the grid scale of a global model, including boundary layers, surface hydrology, and ecosystem dynamics. Each parameterization, in turn, typically depends on one or more parameters whose numerical values are poorly constrained by first principles or observations at the grid scale of global models. Being approximate descriptions of unresolved processes, there exist different possibilities for the representation of many processes.
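To make the cloud-fraction example above concrete: one widely used empirical form is a Sundqvist-style scheme, which diagnoses cloud cover from grid-mean relative humidity. This sketch uses illustrative numbers, not values from any particular model:

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction from grid-mean relative humidity.

    rh_crit, the humidity at which cloud starts to form in the grid cell,
    is a classic poorly constrained tuning parameter.
    """
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

# The same grid-cell humidity gives quite different cloud cover
# depending on the chosen rh_crit:
for rh_crit in (0.7, 0.8, 0.9):
    print(rh_crit, round(cloud_fraction(0.85, rh_crit), 2))
# 0.7 0.29 / 0.8 0.13 / 0.9 0.0
```

The critical humidity is exactly the kind of parameter the paper describes: physically motivated, poorly constrained at the grid scale of a global model, and therefore a natural tuning knob.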
...
There is evidence that a number of model errors are structural in nature and arise specifically from the approximations in key parameterizations as well as their interactions. For example, some models systematically underestimate rainfall over monsoon regions, whereas others will do the opposite. Other biases are systematic across models, like the presence of a persistent double Pacific intertropical convergence zone (ITCZ) on both sides of the equator or warm biases over the eastern tropical oceans. Those model biases are indeed often resistant to model tuning. Tuning a model to improve its performance on a specific target also often degrades performance on other metrics. For example, tuning a model to improve the intraseasonal variability of precipitation in the tropics often comes at the cost of increased biases in the mean state.
Introduction of a new parameterization or improvement also often decreases the model skill on certain measures. The preexisting version of a model is generally optimized by both tuning uncertain parameters and selecting model combinations giving acceptable results, probably inducing compensation errors (overtuning). Improving one part of the model may then make the skill relative to observations worse, even though it has a better formulation. The stronger the previous tuning, the more difficult it will be to demonstrate a positive impact from the model improvement and to obtain an acceptable retuning. In that sense, tuning (in case of overtuning) may even slow down the process of model improvement by preventing the incorporation of new and original ideas.
...
Some modeling groups claim not to tune their models against twentieth-century warming; however, even for model developers, it is difficult to ensure that this is absolutely true in practice because of the complexity and historical dimension of model development.
...
The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals. It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.
References
The Art and Science of Climate Model Tuning, Frederic Hourdin et al., BAMS (2017)
Comments

So, I take it that a closed-form climate change model, where one can plug in the current values of known parameters, is not yet possible and maybe is inherently never possible? Is that another way of saying the climate system is chaotic? Or is it a case where orders-of-magnitude faster computers are needed?
Yes, CFD experts have known from the time climate modeling was an idea in James Hansen's convoluted mind that the results would have very large numerical and subgrid-model errors. In reality the results will only be skillful on those outputs used in tuning, or closely related to them. Recently, a few modelers have come clean, including Palmer and Stevens, who are proposing massive increases in resolution.