5 Comments

SOD wrote: “the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.”

IIRC some temperature records are not autocorrelated from year to year. About 15 years ago, there was a big debate about whether the upper tropical troposphere was warming faster than the surface, as models and theory predicted. One publication by a skeptic failed to adjust the confidence intervals on the warming rate for autocorrelation in the monthly temperature data. Apparently annual data was not autocorrelated. After correction, there were about 2 independent measurements per year, not the 12 expected for uncorrected monthly data.

I’ve also looked into autocorrelation in the rise of sea level and whether it is accelerating. We have monthly sea level data, but that data is so highly autocorrelated that it is effectively independent only every thirty months.

So, in my ignorance, I conclude that the problem of autocorrelation on a yearly time scale varies from data set to data set. In your next post, however, you look at the frequency (power?) spectrum of natural variability, which addresses a different aspect of the problem.
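For anyone who wants to reproduce that kind of correction, here is a minimal Python sketch of the standard lag-1 (AR(1)-style) adjustment for effective sample size, n_eff = n(1 - r1)/(1 + r1). The function name and parameters are illustrative, with synthetic data standing in for a real monthly record:

```python
import numpy as np

def effective_sample_size(x):
    """Standard lag-1 (AR(1)) adjustment: n_eff = n * (1 - r1) / (1 + r1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.sum(x[:-1] * x[1:]) / np.sum(x * x)  # lag-1 autocorrelation
    return len(x) * (1.0 - r1) / (1.0 + r1)

# 30 years of synthetic "monthly" values with strong month-to-month persistence
rng = np.random.default_rng(0)
x = np.zeros(360)
for t in range(1, 360):
    x[t] = 0.85 * x[t - 1] + rng.normal()

print(effective_sample_size(x))  # far fewer than 360 "independent" points
```

With that much persistence the 360 monthly values carry only a few dozen independent points, which is the same flavor of shrinkage as "2 independent measurements per year".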


Frank:

"IIRC some temperature records are not autocorrelated from year to year"

Definitely. In general, there is some persistence. In the next article - https://scienceofdoom.substack.com/p/natural-variability-attribution-and-665 - we look at Huybers & Curry, who write:

"At interannual and decadal timescales, surface temperatures exhibit a strong land–sea contrast, with b averaging one over the oceans and zero over the continents."

Where b is the power-law exponent in P(f) ∝ f^(-b). So some places show little or no persistence (where b = 0). It all depends on the timescale and the region.
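To make b concrete, here is a minimal Python sketch (not Huybers & Curry's method) that estimates b from a least-squares fit to the log-log periodogram of a series. White noise should come out near b = 0 and a random walk near b = 2:

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate b in P(f) ~ f**(-b) from a log-log periodogram fit."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]   # drop the f = 0 bin
    power = np.abs(np.fft.rfft(x)[1:]) ** 2     # raw periodogram
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope                               # b is minus the log-log slope

rng = np.random.default_rng(1)
print(spectral_slope(rng.normal(size=4096)))             # white noise: b ~ 0
print(spectral_slope(np.cumsum(rng.normal(size=4096))))  # random walk: b ~ 2
```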


Thanks for the reply. I have read the next article. Both are excellent, but I’m having difficulty differentiating autocorrelation from persistence, or perhaps from “long-term persistence”.

What you’ve exposed me to, when I think about it, is the spectrum of noise found in climate data. In GMST data, the main source of noise (that I’m aware of) is ENSO, which lasts about 6+ months. Therefore the noise in monthly data is not randomly distributed and we need to correct for autocorrelation. If the major source of noise occurred on a centennial time scale (red noise), a least-squares fit would produce a warming rate with a confidence interval that is misleadingly narrow. So maybe I understand the extreme cases. The implications of white noise aren’t clear to me. This is a tough subject for me and I’m relying more on my intuition than the math.


Frank,

Persistence just means "some level of the past affecting the future".

Autocorrelation is the correlation (between -1.0 and +1.0) of a current value with a past value. If it's zero then there is no persistence.

In the simplest AR(1) model the autocorrelation decays exponentially: if the correlation with last year's value is 0.5, then the correlation with the value from 2 years before is 0.25, from 3 years before is 0.125, and so on.
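A quick illustration in Python, assuming a coefficient of 0.5 (the values here are illustrative): the sample autocorrelations of a simulated AR(1) series track 0.5, 0.25, 0.125 at lags 1, 2, 3.

```python
import numpy as np

# Simulate a long AR(1) series: x_t = phi * x_{t-1} + noise, with phi = 0.5
phi, n = 0.5, 200_000
rng = np.random.default_rng(2)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

x -= x.mean()
var = np.sum(x * x)
for k in (1, 2, 3):
    r_k = np.sum(x[:-k] * x[k:]) / var  # sample autocorrelation at lag k
    print(f"lag {k}: sample r = {r_k:.3f}, theory phi**k = {phi**k:.3f}")
```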

Other models have different persistence. In the FD (fractionally differenced) model the autocorrelation decays more slowly.
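For contrast, a small numeric sketch with illustrative values: the AR(1) autocorrelation phi**k shrinks exponentially, while a long-memory FD(d) process decays only as a power law, roughly k**(2d - 1) up to a constant, so it stays correlated far longer.

```python
# Exponential vs power-law decay of autocorrelation (illustrative values)
phi, d = 0.5, 0.3
for k in (1, 2, 5, 10, 50):
    print(f"lag {k:2d}: AR(1) {phi**k:.2e}   FD ~ {k ** (2 * d - 1):.2e}")
```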

We can express the relationships instead as a power law.

I find the power law to be more meaningful. And it seems it isn't some constant: it varies over timescales, and is also different for different terrains.

It will also be different for rainfall vs temperature.

There are equations relating the power law to the different autocorrelation models, but I'm not on top of those. I hope to bring the ideas together at some stage.

And of course, if someone is looking for a "naive" statistical significance - "how likely is it that this set of values occurred by chance?", where "naive" = assuming random uncorrelated noise for internal variability - they are going to be way off base with data that is autocorrelated (i.e., persistent).
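A sketch of how far off (the parameters here are illustrative): fit a least-squares trend to pure AR(1) noise containing no trend at all, and count how often the usual regression p-value, which assumes uncorrelated residuals, declares the trend "significant" at the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
phi, n, trials = 0.9, 100, 2000
t = np.arange(n)
false_positives = 0
for _ in range(trials):
    x = np.zeros(n)
    for i in range(1, n):  # AR(1) noise, no trend at all
        x[i] = phi * x[i - 1] + rng.normal()
    if stats.linregress(t, x).pvalue < 0.05:
        false_positives += 1

print(false_positives / trials)  # far above the nominal 0.05
```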

I like the simple quote from Cohn & Lins (2005):

"From a practical standpoint, however, it may be preferable to acknowledge that the concept of statistical significance is meaningless when discussing poorly understood systems."


More albedo and the Earth cools.

Less albedo and the Earth warms.

No albedo and the Earth becomes much like the Moon, barren, i.e. no water, 400 K lit side, 100 K dark.

Geoengineers know this, so why don’t they also admit it violates the “frozen ice ball” Earth of GHE theory?

MONEY??!!

“TFK_bams09” (the Trenberth, Fasullo & Kiehl 2009 BAMS energy-budget diagram)

Average solar constant of 1,364 W/m^2 arrives at the top of the atmosphere.

Divide by 4 to average the power intercepted by the disc over the spherical surface area.

(A sphere of radius r has 4 times the area of a disc of radius r. This is Fourier’s model, which even Pierrehumbert says is no good.)

1,364/4=341

Apply 30% albedo.

341*.7=238.7 (239)

Deduct 78 absorbed in atmosphere.

Net/net of 161 arrives at surface.

Per LoT 1, 161 is ALL!! that can leave.

0.9 ground + 17 sensible + 80 latent + 1st 63 LWIR (by remaining difference) and balance is closed!!!!

(1st 63 LWIR is MIA??? Where did it go?? Did TFK palm it like a magic act??)

Where does this extra 396 upwelling come from??

It is the theoretical LWIR from a S-B BB calculation at 16 C, 289 K, that fills the denominator of the emissivity ratio, i.e. 63/396=0.16.
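For readers checking the figures, here is the comment’s arithmetic as a Python sketch; the 78 and 63 W/m^2 values are taken from the comment’s reading of the TFK diagram, not recomputed.

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W/m^2/K^4

toa = 1364.0 / 4.0              # 341 W/m^2 averaged over the sphere
post_albedo = toa * 0.7         # ~239 W/m^2 after 30% albedo
surface = post_albedo - 78.0    # ~161 W/m^2 after atmospheric absorption
sb_289 = SIGMA * 289.0 ** 4     # ~396 W/m^2: S-B blackbody at 16 C (289 K)

print(toa, post_albedo, surface)   # 341.0 238.7 160.7
print(sb_289, 63.0 / sb_289)       # ~395.5 and ~0.16
```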

It is not real, it is “extra”, it violates LoT 1.

The 396 upwelling “measurement”/333 “back” cold to warm/a 2nd 63 LWIR GHE loop violates LoT 1 & 2.

Remove the 396/333/63 GHE loop from the graphic and the balance still holds.

Those who claim to measure 400 +/- W/m^2 upwelling from the surface are applying an incorrect emissivity.

This graphic and all of its clones are trash.

The kinetic heat transfer processes of the contiguous atmospheric molecules render a surface BB impossible.

Energy leaving any thermal system = Conduction + Convection + Advection (wind) + Latent (water condensation and evaporation) + Radiation = 100 %

63/(17+80+63) = 63/160 ≈ 0.39

A BB only exists in a vacuum as I demonstrate by experiment.

There is no GHE and no CO2 driven CAGW.
