So, I take it that a closed-form climate change model, where one can plug in the current values of known parameters, is not yet possible and maybe never will be? Is that another way of saying the climate system is chaotic? Or is it a case where orders-of-magnitude faster computers are needed?
John,
It's not to do with chaos.
There are two issues:
- parameters you don't know, e.g., characteristics of aerosol particles that vary a lot in different locations
- "sub-grid parameterization" which is a big subject and the main problem for current grid sizes. Essentially the GCM is 100km x 100km (at best) but many processes take place at much smaller resolutions. So you have to average them out. But you can't average out complex non-linear processes. You have to have an educated guess at the right value of a parameter instead.
Orders-of-magnitude faster computers are the only way to attack sub-grid parameterizations. However, suppose we want to get down to 1 km x 1 km. Because you also need to increase the vertical resolution and decrease the time step, you might need a million times the current processing power.
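The "million times" figure comes from a rough back-of-the-envelope scaling; the factors below are illustrative assumptions, not a costing of any particular model.

```python
# Rough cost scaling for refining a GCM from ~100 km to ~1 km grid spacing.
horizontal_refinement = 100                 # 100 km -> 1 km in each horizontal direction
columns = horizontal_refinement ** 2        # ~10,000x more grid columns
time_steps = horizontal_refinement          # explicit schemes need dt roughly proportional
                                            # to dx (CFL), so ~100x more steps
vertical_levels = 5                         # assumed modest increase in vertical resolution

print(columns * time_steps)                    # 1,000,000x before touching the vertical
print(columns * time_steps * vertical_levels)  # and more again with extra levels
```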
You'll definitely have more fidelity in representing your processes, and maybe this will be a big leap forward in climate models. But:
1. You still need sub-grid parameterizations (aka closure schemes) for processes taking place below 1 km x 1 km.
2. You might now find that the results of your climate model are highly dependent on all the parameters you don't know, which must now be fed in at a much higher spatial resolution, e.g. the characteristics of aerosols in the atmosphere.
People who've spent a long time looking at this will have much better ideas, and will probably disagree with each other. As David notes in his comment, Palmer & Stevens are pushing for this km-scale resolution. Instead of 20+ climate modeling centers each doing their own thing, have them all pool resources. I'm all for that idea.
But I have no idea whether climate modeling will be amazing if 1Mx processing power gets thrown at the problem.
Thank you for the clarification!
I think it's unlikely that even 1 km resolution will really solve the problems. Aeronautical CFD already uses this kind of resolution, and it merely surfaces the problems with nonlinearity and chaos. There still seems to be some unresolved sensitivity to grid density. I personally think there is something to be gained by working on better subgrid models and solving them all simultaneously; right now they are applied sequentially, which is a very big problem.
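As a toy illustration of the sequential-versus-simultaneous point (my own construction, not how any production code is organized): two interacting processes advanced one after the other give a different answer at a finite time step than the same processes advanced together, and that gap is the splitting error.

```python
import numpy as np

# Two interacting "processes" acting on a 2-component state, written as
# linear operators that do not commute (A1 @ A2 != A2 @ A1).
A1 = np.array([[-1.0, 0.5], [0.0, -0.2]])
A2 = np.array([[-0.3, 0.0], [0.8, -1.5]])

def run(y0, dt, nsteps, sequential):
    y = y0.copy()
    for _ in range(nsteps):
        if sequential:
            y = y + dt * (A1 @ y)         # process 1 first...
            y = y + dt * (A2 @ y)         # ...then process 2 sees the updated state
        else:
            y = y + dt * ((A1 + A2) @ y)  # both tendencies from the same state
    return y

y0 = np.array([1.0, 1.0])
print(run(y0, dt=0.1, nsteps=10, sequential=True))
print(run(y0, dt=0.1, nsteps=10, sequential=False))
# The two results differ at finite dt; the gap shrinks only as dt -> 0.
```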
BTW, chaos is a really big problem for any eddy-resolving, time-dependent calculation. The issue is the nature of the attractor; the key question is how attractive it is. Another issue is bifurcations and saddle points. We simply don't know enough here to make well-founded statements about whether high resolution will help or hurt.
IMO, climate modeling will be better if 1Mx processing power gets thrown at the problem. I don't think "amazing" or "not amazing" is a useful way to characterize any improvement. I think Box's saying will apply: "All models are wrong, but some models are useful". How much improvement is needed to be useful? At a minimum, one needs to get the feedbacks right, especially cloud feedbacks, to be useful for projecting climate change.
Smaller grid cells are certain to improve the representation of some phenomena. No model used to show a QBO (a reversal of the direction of the stratospheric winds and back roughly every 28 months), but once grid cells in the stratosphere were made thinner vertically, many climate models could simulate a realistic QBO (apparently driven by gravity waves).
Cliff Mass has been doing short (1 month?) simulations with what he calls convection-permitting models (CPMs), with grid cells 3 km across horizontally and time steps of 18 seconds, in which convection no longer needs to be parameterized. That means no entrainment parameter, the parameter that, when varied realistically, produces the biggest change in climate sensitivity. Getting rid of the entrainment parameter should be a major step forward. Mass has been making runs initialized from present conditions that last about three months. Unlike conventional AOGCMs, these models show a realistic Madden-Julian Oscillation traveling eastward in the tropics. They also no longer produce too much drizzle, and they exhibit diurnal cycles of precipitation that more closely resemble what is observed (peak precipitation over land in the late afternoon and over water just before dawn).
https://journals.ametsoc.org/view/journals/bams/100/6/bams-d-18-0210.1.xml?tab_body=fulltext-display
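As an aside, the 3 km and 18 s figures hang together via the usual stability constraint for explicit schemes, which is also why finer grids force shorter time steps. A rough check, assuming a standard CFL-type limit (my numbers, not Mass's actual configuration):

```python
dx = 3000.0        # horizontal grid spacing (m)
dt = 18.0          # time step (s)
print(dx / dt)     # ~167 m/s: the fastest signal that can cross one cell per step

u_max = 150.0      # assumed fastest wind / gravity-wave speed to handle (m/s)
for dx in (3000.0, 1000.0):
    print(dx, dx / u_max)  # upper bound on dt shrinks in proportion to dx
```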
Higher-resolution climate models therefore will certainly represent phenomena better than today's models. I don't know how we can tell whether they are "useful" for predicting climate change and climate sensitivity.
Tuning in climate models involves calibrating the inertial forcing against known measurements. In the case of fluid-dynamics models such as ENSO, the calibration is against the delta length-of-day (LOD) measurements. One can do the same for the QBO, but since the QBO is a zonal wave-number-0 behavior, most of the forcing factors can be ignored, keeping only the invariant factors.
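For readers unfamiliar with what "calibrating against known measurements" means mechanically, here is a generic least-squares sketch with synthetic data; it is purely illustrative and not the commenter's actual LOD or QBO procedure.

```python
import numpy as np

# Fit the amplitude of an assumed periodic forcing to an "observed" series.
# Synthetic data only -- not real LOD or QBO measurements.
rng = np.random.default_rng(0)
t = np.arange(0, 50, 0.25)
observed = 1.3 * np.sin(2 * np.pi * t / 2.37 + 0.4) + 0.1 * rng.standard_normal(t.size)

period = 2.37   # assumed known forcing period
design = np.column_stack([np.sin(2 * np.pi * t / period),
                          np.cos(2 * np.pi * t / period)])
coeffs, *_ = np.linalg.lstsq(design, observed, rcond=None)
print(np.hypot(*coeffs))   # recovers ~1.3, the calibrated forcing amplitude
```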
Yes, CFD experts have known from the time climate modeling was first an idea in James Hansen's convoluted mind that the results would have very large numerical and subgrid-model errors. In reality the results will only be skillful for those outputs used in tuning, or outputs closely related to them. Recently, a few modelers have come clean, including Palmer and Stevens, who are proposing massive increases in resolution.