Natural Variability, Attribution and Climate Models #3
An Example of Problems in Climate Models
In #1 we took a brief look at natural variability - climate varies from decade to decade and century to century. In #2 we took a brief look at attribution, from “simple” models and from climate models (GCMs).
Here’s an example of the problem of “what do we make of climate models?”
I wrote about it on the original blog - Opinions and Perspectives – 6 – Climate Models, Consensus Myths and Fudge Factors. I noticed that the paper I used in that article came up in Hourdin et al 2017, which in turn was referenced in the most recent IPCC report, AR6.
So this is the idea from the paper by Golaz and co-authors in 2013.
They ran a climate model over the 20th century - this is a standard thing to do to test a climate model on lots of different metrics. How well does the model reproduce our observations of trends?
In this case it was temperature change from 1900 to present.
In one version of the model they used a parameter value (related to aerosols and clouds) that is traditional but probably wrong; in another version they used the best value based on recent studies; and in a third version they used an alternate value.
What happens?
The black and gray lines in the paper's figure are observations of temperature change over 100+ years. The green line is the climate model simulation using the traditional but probably wrong value, the blue line uses the “best value” and the red line uses the “worst value”.

We see that while observed temperatures rose about 0.8°C from the late 1800s to 2000, the model with the best value of this parameter (blue line) produces only about 0.2°C of warming from 1860 to 2000. The red line, with the worst value, is very close to the observations.
CM3w predicts the most realistic 20th century warming. However, this is achieved with a small and less desirable threshold radius of 6.0 μm for the onset of precipitation. Conversely, CM3c uses a more desirable value of 10.6 μm but produces a very unrealistic 20th century temperature evolution. This might indicate the presence of compensating model errors.
The paper notes that the present day climate produced in the different versions of these models is very similar.
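The mechanism at work here, a tuned parameter shifting the simulated century-scale warming, and a second error potentially compensating for the first, can be sketched with a toy zero-dimensional energy-balance model. This is purely an illustration: the forcing histories, heat capacity, feedback values and the aerosol scaling (standing in for the threshold-radius parameter) are all invented, and nothing below is taken from Golaz et al 2013 or from the GFDL CM3 model.

```python
# Toy zero-dimensional energy-balance model illustrating how a tuned
# cloud/aerosol parameter changes simulated 20th century warming, and
# how a second ("compensating") error can hide the first one.
# All numbers here are invented for illustration.
import numpy as np

def run_ebm(f_ghg, f_aero, aero_scale, lam, heat_cap=8.0):
    """Step C*dT/dt = F(t) - lam*T forward with 1-year Euler steps.

    aero_scale stands in for the effect of a tunable cloud parameter
    (such as the precipitation-onset threshold radius) on the strength
    of aerosol forcing; lam (W/m^2/K) is the climate feedback parameter.
    """
    temp, series = 0.0, []
    for f_g, f_a in zip(f_ghg, f_aero):
        temp += (f_g + aero_scale * f_a - lam * temp) / heat_cap
        series.append(temp)
    return np.array(series)

years = np.arange(1860, 2001)
ramp = (years - 1860) / (2000 - 1860)
f_ghg = 2.5 * ramp      # idealised greenhouse forcing, W/m^2
f_aero = -1.2 * ramp    # idealised aerosol cooling, W/m^2

# Same feedback, different aerosol parameter: the warming differs a lot.
warm_trad = run_ebm(f_ghg, f_aero, aero_scale=0.3, lam=1.2)  # "traditional"
warm_best = run_ebm(f_ghg, f_aero, aero_scale=1.0, lam=1.2)  # "best value"

# A compensating error: keep the "best" aerosol parameter but weaken
# the feedback (i.e. raise the sensitivity) until the century-scale
# warming looks right again.
warm_comp = run_ebm(f_ghg, f_aero, aero_scale=1.0, lam=0.75)

print(f"1860-2000 warming, traditional tuning: {warm_trad[-1]:.2f} K")
print(f"1860-2000 warming, best-value tuning:  {warm_best[-1]:.2f} K")
print(f"1860-2000 warming, compensated tuning: {warm_comp[-1]:.2f} K")
```

Changing only the aerosol-related parameter shifts the simulated 1860–2000 warming substantially, while adjusting a second uncertain parameter (here the feedback strength) brings the warming back into line despite both parameters now being wrong - exactly the kind of compensation the paper warns about.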
References
Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz & Hiram Levy II, GRL 2013

The art and science of climate model tuning, Frédéric Hourdin et al., BAMS 2017
Parameters are best-guess ranges, not exactly robust science, and not the reality of what's happening. When the models use best guesses, it's not an attribution study; it's simply adjusting the algorithms to produce what they want, or expect, to see. Then they attribute the "results" to the forcings and feedbacks of their best-guess ranges. Maybe that's why the models aren't very successful.
These are simply simulation games that belong more to sci-fi than to reality. No models projected the 20-year hiatus, and we're now into a 9-year cooling trend. The models aren't designed with reality in mind.
I had a post at Climate Etc. in December of last year going through the CFD literature and explaining why this lack of skill is implied by the literature.