With apologies for the long delay, due in part to regular life and in part to this being a “difficult to get your hands around” kind of subject.
In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from their main reference, Imbers and co-authors from 2014.
Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.
We’ll have a look at their models. I’ll try to explain them in simple terms, along with some technical details.
Autoregressive or AR(1) Model
One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.
Let’s suppose the temperature tomorrow in London was random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.
But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.
We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.
If the temperature yesterday was 15°C, you expect that today it will be closer to 15°C than to the entire range of temperatures in London for this month for the past 50 years.
Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.
AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
Technical Part on AR(1)
I wrote a bit about this a long time ago in Statistics and Climate – Part Three – Autocorrelation:
The AR(1) model can be written as:
x_{t+1} - μ = φ(x_t - μ) + ε_{t+1}

where x_{t+1} = the next value in the sequence, x_t = the last value in the sequence, μ = the mean, ε_{t+1} = a random quantity and φ = the auto-regression parameter
In non-technical terms, the next value in the series is made up of a random element plus a dependence on the last value – with the strength of this dependence being the parameter φ.
The spectrum of completely random noise (from a “normal distribution”) is “white noise”. But from an AR(1) process it’s “red noise”. Bigger changes occur over longer timescales.
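To make the AR(1) idea concrete, here is a minimal Python sketch (mine, not from Imbers et al) that simulates the equation above. The values of φ, σ and the series length are arbitrary choices for illustration.

```python
import numpy as np

# Minimal AR(1) simulation: x[t+1] - mu = phi * (x[t] - mu) + eps[t+1]
# phi, mu, sigma and n are illustrative values, not taken from Imbers et al.
rng = np.random.default_rng(0)
n, mu, phi, sigma = 1000, 0.0, 0.6, 1.0

x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + phi * (x[t] - mu) + sigma * rng.standard_normal()

# The lag-1 autocorrelation of the simulated series should be close to phi
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 autocorrelation ~ {r1:.2f} (phi = {phi})")
```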
Fractional Differencing or FD Model
Imbers et al say:
There is empirical evidence that the spectrum of global mean temperature is more complex than the spectrum of an AR(1) process..
..We then alternatively assume that the global mean temperature internal variability autocorrelation decays algebraically, allowing for all time scales to be correlated. This long time correlation will clearly have an effect on the statistical significance of the anthropogenic signal
So, the FD model offers another approach to creating a simple model with random variation but where the past influences the present.
Technical Part on FD
The FD model is defined as a stationary stochastic process u_t with zero mean such that:

u_t = (1 - B)^{-δ} ε_t,  where B is the backshift operator: B u_t = u_{t-1}

The model is fully specified by two parameters: δ and the standard deviation σ_ε of the white noise ε_t.
The spectrum of an FD model has more energy at longer timescales than the AR(1) model.
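As a sketch of how such a process can be simulated, one common approach is to truncate the moving-average expansion of (1 - B)^{-δ} and filter white noise with those weights. The values of δ, σ, the series length and the truncation point below are illustrative only, not taken from the paper.

```python
import numpy as np

# FD (fractionally differenced) noise: u_t = (1 - B)^(-delta) * eps_t,
# approximated by truncating the MA(infinity) expansion
#   u_t = sum_k psi_k * eps_{t-k},  psi_0 = 1,  psi_k = psi_{k-1}*(k-1+delta)/k
# delta, sigma, n and the truncation K are illustrative, not from the paper.
rng = np.random.default_rng(0)
n, delta, sigma, K = 2000, 0.3, 1.0, 500

psi = np.empty(K)
psi[0] = 1.0
for k in range(1, K):
    psi[k] = psi[k - 1] * (k - 1 + delta) / k

eps = sigma * rng.standard_normal(n + K)
# Filter the white noise with the MA weights and keep n "warmed up" values
u = np.convolve(eps, psi, mode="full")[K:K + n]

print("sample std of FD noise:", u.std().round(2))
```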
Comparison of AR(1) and FD
The easiest way to think of this: suppose today’s temperature is 50% of yesterday’s plus some random element. Then today depends on two days ago with a weight of 25% (plus random elements), on three days ago with a weight of 12.5%, and so on. The dependence decays exponentially. The influence of the past on the present disappears quickly.
You can substitute “today” and “yesterday” with “this year” and “last year”. The idea is the same.
With the FD model, the influence of the past on the present disappears much more slowly.
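To put rough numbers on “much more slowly”: AR(1) autocorrelation falls off exponentially (roughly φ^k at lag k), while FD autocorrelation falls off algebraically (roughly proportional to k^(2δ-1) at large lags). Here is a small sketch with illustrative values of φ and δ, chosen by me rather than taken from the paper.

```python
# Rough comparison of how "memory" decays with lag k:
#   AR(1): autocorrelation ~ phi**k            (exponential decay)
#   FD:    autocorrelation ~ k**(2*delta - 1)  (algebraic decay, up to a
#                                               constant factor, large k)
# phi and delta below are illustrative values, not from Imbers et al.
phi, delta = 0.5, 0.3
for k in [1, 2, 3, 10, 50, 100]:
    ar1 = phi ** k
    fd = float(k) ** (2 * delta - 1)
    print(f"lag {k:3d}: AR(1) ~ {ar1:.2e}   FD ~ {fd:.2e}")
```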
Back to the point of the paper by Imbers
So, what did they find with their two hypothetical models of internal variability? What were they testing?
Instead of using GCMs they used a simple “1-dimensional” model of the climate, adding their two ideas of internal variability.
They used the known radiative forcing changes from natural changes (solar, volcanic) and from greenhouse gases and sulphates. They tested the correlation of the model output with actual temperature results under a wide variety of parameters for their two internal variability models.
The simple answer is that under these two “plausible” models of internal variability they were able both to detect temperature changes and to attribute them to GHGs.
From their paper:

Figure 1 shows that for our detection model, the greenhouse gas signal is detected and attributed, the volcanic signal is only detected, and the solar signal is not detected nor attributed for both models of internal variability. In the case of the sulfates forcings, the result depends on the representation of the internal variability.
The results didn’t really depend on which plausible values of the parameters were chosen.
People who have followed the idea up to here won’t be surprised that the FD, or “long memory”, model performed a little worse than the AR(1) model. That’s because a model with less persistence over long time periods looks more like uncorrelated random noise, and so an imposed signal is easier to see against it.
I’ve tried to summarize a complex paper with lots of stats. The important aspect to me is not the stats, which is why we haven’t delved into this part (plus my interest in stats work is low).
- We know with some accuracy the radiative forcing over the last 150 years from GHGs and other forcings, and we know the actual global surface temperature change over this time period.
- We can plausibly model the climate as a simple one-dimensional model with some values for how the climate responds.
- We can throw in two models of internal variability: AR(1) and FD.
- We can assess how well this climate model plus natural variability follows the temperature changes over the last 150 years.
- We can see that, overall, for either an AR(1) or an FD process, over a good range of parameters chosen for these models, the attribution statistics seem to work (a rough sketch follows below).
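For readers who want to see the shape of this kind of calculation, here is a heavily simplified Python sketch. Everything in it, the forcing responses, the parameter values and the plain least-squares fit, is invented for illustration; it is not the data, model or statistical machinery of Imbers et al.

```python
import numpy as np

# A very rough sketch of the flavour of the exercise (NOT the method or data
# of Imbers et al): build a toy "observed" temperature from made-up forcing
# responses plus AR(1) internal variability, then regress it onto the forced
# responses to see whether the scaling factors are recovered.
rng = np.random.default_rng(1)
n_years = 150
t = np.arange(n_years)

# Made-up forced temperature responses (K), purely illustrative shapes:
ghg = 1.0 * (t / n_years) ** 2                  # slow, accelerating warming
volcanic = np.zeros(n_years)
volcanic[[40, 90, 130]] = -0.5                  # brief cooling spikes
solar = 0.05 * np.sin(2 * np.pi * t / 11.0)     # small 11-year cycle

# AR(1) internal variability with illustrative parameters
phi, sigma = 0.6, 0.1
noise = np.zeros(n_years)
for i in range(1, n_years):
    noise[i] = phi * noise[i - 1] + sigma * rng.standard_normal()

# Toy "observations": true scaling factor of 1 on each forced response
obs = ghg + volcanic + solar + noise

# Ordinary least squares for the scaling factors (the real study uses a
# noise model consistent with AR(1) or FD variability, not plain OLS)
X = np.column_stack([ghg, volcanic, solar])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("recovered scaling factors (GHG, volcanic, solar):", beta.round(2))
```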
Is this anything more than an interesting academic exercise?
If you build a result on a lot of assumptions, and don’t keep reinforcing that it’s built on a lot of assumptions, you have a house of cards.
We got into this paper because it’s a key reference in AR6 of the IPCC on attribution.
In the next article we’ll dig a little more into internal variability, possibly starting with Huybers and Curry 2006.
Note for commenters - for those who don’t understand radiative physics but think they do and want to yet again derail comment threads with how adding CO2 to the atmosphere doesn’t change the surface temperature.. head over to Digression #3 - The "Greenhouse effect" and add your comments there. Comments here on that point will be deleted.
References
Sensitivity of Climate Change Detection and Attribution to the Characterization of Internal Climate Variability, Jara Imbers et al, Journal of Climate (2014)
Another paper with many similarities:
Interpretation of North Pacific Variability as a Short- and Long-Memory Process, Percival, Overland & Mofjeld, Journal of Climate (2001)
SOD wrote: “the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.”
IIRC some temperature records are not autocorrelated from year to year. About 15 years ago, there was a big debate about whether the upper tropical troposphere was warming faster than the surface, as models and theory predicted. One publication by a skeptic failed to adjust the confidence intervals for the warming rate for autocorrelation in monthly temperature data. Apparently annual data was not autocorrelated. After correction, there were about 2 independent measurements per year, not the 12 expected for uncorrected monthly data.
I’ve also looked into autocorrelation in the rise of sea level and whether it is accelerating. We have monthly sea level data, but that data is so highly autocorrelated that values are effectively independent only about every thirty months.
So, in my ignorance, I conclude that the problem of autocorrelation on a yearly time scale varies from data set to data set. In your next post, however, you look at the frequency (power?) spectrum of natural variability, addressing a different aspect of natural variability.
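A standard rule of thumb makes this kind of adjustment concrete: for data whose autocorrelation behaves like an AR(1) process with lag-1 autocorrelation φ, n correlated samples carry roughly the information of n(1 - φ)/(1 + φ) independent ones. A small sketch with illustrative values of φ (not estimates from the records mentioned above):

```python
# Effective number of independent samples for AR(1)-like autocorrelation:
#   n_eff ~ n * (1 - phi) / (1 + phi)
# The phi values below are illustrative, not estimates from the monthly
# temperature or sea level records discussed above.
def n_effective(n, phi):
    return n * (1 - phi) / (1 + phi)

for phi in (0.0, 0.5, 0.8, 0.9):
    print(f"phi = {phi:.1f}: 120 monthly values ~ "
          f"{n_effective(120, phi):.0f} independent samples")
```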