A while back I tried to find out what model of natural variation was included in the climate models. After searching and finding no formal definition, I finally worked out that “variation” in the world of climate modelling means one of two things:
- Instrumentation noise – which they assume can be eliminated by averaging (false for long-term 1/f-type noise).
- “Ensemble forecasting” – the initial conditions are slightly perturbed and each perturbed set is fed into the model to produce a new run. If, say, 70% of the runs predict rain, this is taken to mean there is a 70% chance of rain.
The first is assumed to be external to the climate – arising only from imperfect measurement. The second is the only form of randomness I could see included in the climate models. However, this initial-conditions-only model of variability is fundamentally wrong, and it gets worse the longer the forecast. The technical reason is that the “noise energy”, as I like to call it (others might prefer “degrees of freedom”), has a finite life within the model, and eventually the main factor influencing the outcome is the quirks of the model itself.
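To make that concrete, here is a deliberately trivial toy in Python (my own illustration, not any real climate model): an “ensemble” of perturbed initial conditions is fed through a purely deterministic, dissipative model, and the spread collapses, so after a while the outcome tells you about the model, not about the perturbations.

```python
# A toy "ensemble" run through a purely deterministic model (my own sketch,
# not any real GCM): the initial perturbations die away because every member
# is pulled toward the same model-determined equilibrium.
import numpy as np

rng = np.random.default_rng(42)
members = 1.0 + 0.1 * rng.standard_normal(100)   # 100 perturbed initial conditions

for step in range(200):
    members = members + 0.05 * (0.3 - members)   # relax toward the model's equilibrium
    if step % 50 == 0:
        print(f"step {step:3d}: ensemble spread = {members.std():.6f}")
```

The spread shrinks by a fixed factor every step, which is my point about the “noise energy” having a finite life: whatever randomness you put in at the start is soon gone.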
I’ve struggled to find a way to explain this simply, but today I hit on a simple analogy which is that of cards.
Imagine a simple game of cards, like whist, bridge, etc.
Now imagine you start playing with a hand that has been thoroughly shuffled. Now imagine that each time you pick up the cards, you do so in a systematic way with no shuffling.
As anyone who has played cards will know, cards that are not shuffled tend to introduce patterns, and those patterns get easier and easier to spot the less the cards are shuffled. So, whilst the initial hand would be played with random cards, very soon the pattern of the cards will mostly reflect how the game has been played. But unlike ordinary card games, where humans are never entirely systematic in how they pick up the cards, a computer model is thoroughly systematic … indeed the great benefit of computers is how systematic they are … unless someone explicitly tells them to introduce randomness.
That is not to say the initial order of the cards did not affect the game, but it does mean that sooner or later a very strong pattern will develop in the cards, and from that point onward there is little real randomness in their order.
Likewise, in a climate model, any variation introduced at the start will quickly be moulded by the model into a pattern that is largely a characteristic of the model. And once a simulation is “stuck in that rut”, it will always be stuck in that rut, so its evolution is entirely predictable. The degrees of freedom of the model therefore tend to reduce over time as the “ruts merge”, whereas the real climate not only has far more degrees of freedom in the first place, but external and internal perturbations are constantly adding new degrees of freedom to renew the randomness … much like shuffling the cards.
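Here is the companion sketch to the toy above (again my own toy, nothing to do with a real model): inject a small perturbation at every step – the equivalent of constant shuffling – and the ensemble spread settles at a finite level instead of collapsing to zero.

```python
# The same dissipative toy model as before, but with noise injected at every
# step - a crude stand-in for the constant internal/external perturbations of
# the real climate. The spread no longer collapses.
import numpy as np

rng = np.random.default_rng(42)
members = 1.0 + 0.1 * rng.standard_normal(100)   # same perturbed start as before

for step in range(200):
    members = members + 0.05 * (0.3 - members)   # relax toward the model's equilibrium
    members += 0.02 * rng.standard_normal(100)   # ongoing perturbation ("shuffling")
    if step % 50 == 0:
        print(f"step {step:3d}: ensemble spread = {members.std():.6f}")
```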
That is why climate models fail, and fail all the worse the longer the forecast … it is because they have no concept of “natural variation”. Instead, their only method of modelling variation is to introduce it in the original “shuffle” and (falsely) believe that is enough.
In electronics there is the concept of a “noise source”. This is something that constantly adds variation to the system. Unlike climate models, whose variation is determined at the start, a noise source allows the variation to grow with time, so, as in real systems, the effect of the variation can increase. But the key difference is that an electronic noise model has to have a frequency profile: how much perturbation is added at any particular frequency, or in other words, what scale of change occurs over any given period. In contrast, in ensemble forecasting the frequency profile of the response to the initial conditions is not part of the noise model … it isn’t part of the randomness put into the model, but is instead an artefact of the model itself.
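For what it is worth, here is a minimal sketch of the kind of noise source I mean (my own illustration using numpy, not anything taken from a climate model): a continuous forcing series with a chosen frequency profile, in this case roughly 1/f.

```python
# Generate a continuous forcing series with an approximately 1/f ("pink")
# power spectrum by shaping white noise in the frequency domain.
import numpy as np

def pink_noise(n, rng):
    """Return n samples of approximately 1/f noise, normalised to unit variance."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]              # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)       # power ~ 1/f  =>  amplitude ~ 1/sqrt(f)
    noise = np.fft.irfft(spectrum, n)
    return noise / noise.std()

rng = np.random.default_rng(0)
forcing = pink_noise(10_000, rng)    # perturbation available at every time step
print(forcing[:5])
```

The point is that the modeller has to choose that frequency profile explicitly; it is part of the noise model, not an accident of the simulation code.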
To go back to the card analogy, let us imagine a card game where the winner of the last hand gets an extra card. Given the repeated pattern in the cards, their extra card and the predictability of the game, it is likely the game will get boringly repetitive over time if the cards are not shuffled. Because the methodology tends to favour certain patterns, those patterns will tend to dominate, and when they do, there is nothing to change the system to any new pattern.
But now imagine the cards are shuffled. The winner of the last hand still has their advantage, and the game is still biased toward certain patterns (the winner is better placed to win again next time), but thanks to the added randomness the dominant pattern can and will be overturned sooner or later by sheer chance.
So although the winner is more likely to win the next round, and the round after that – giving the game far more long-term predictability – it is not “stuck in a rut” like the climate models, which, because there is no added variability, stay in whatever rut they fall into.
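To show that this is not just hand-waving, here is a toy version of that card game in Python (my own sketch, not a real card-game engine): the current winner always has an advantage, but only the added randomness of “shuffling” can ever overturn them.

```python
# A toy "winner keeps an advantage" game. With no shuffling (noise = 0) the
# first winner wins forever; with shuffling the lead is overturned by chance.
import numpy as np

def play(rounds, noise, rng):
    winner = 0                        # player 0 wins the first hand
    lead_changes = 0
    for _ in range(rounds):
        # the current winner gets a fixed advantage; shuffling adds luck
        scores = np.array([1.0 if p == winner else 0.0 for p in (0, 1)])
        scores += noise * rng.standard_normal(2)
        new_winner = int(np.argmax(scores))
        lead_changes += (new_winner != winner)
        winner = new_winner
    return lead_changes

rng = np.random.default_rng(1)
print("lead changes with no shuffling:", play(10_000, noise=0.0, rng=rng))
print("lead changes with shuffling:   ", play(10_000, noise=1.0, rng=rng))
```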
I have had a question along the same lines regarding climate models. Look at this graphic: http://www.epa.gov/climatechange/images/science/ModeledWithWithout-large.jpg The caption is this:
“Models that account only for the effects of natural processes are not able to explain the warming over the past century. Models that also account for the greenhouse gases emitted by humans are able to explain this warming. Source: USGCRP 2009”
My question is: how do they know what the natural forcings would have been without any added CO2 (or what they call “human effects”)? Did they simply run the model, subtract what they believe the human component is, and declare the remainder to be the natural forcings? Would that not be reverse logic?
I usually run into silence when I ask this. The most common answer is that we would be in a natural long term gentle cooling trend based on temperature reconstructions of the past 2000 years. That’s not very satisfying. Perhaps that’s one of the reasons they needed to get rid of the MWP.
What they appear to do is this:
1. Think of all the nasty things mankind has put into the atmosphere
2. Fabricate statistics to put into climate models
3. Make a model that supposedly has the “nasty things” affecting the climate and causing all the “nasty” changes … which is probably a simple set of scaling factors, time delays and anything else they fancy putting into the witches’ brew.
4. Declare they “can model the climate” and that “it proves the harmful effects of humanity”.
5. Wait for new data and if it:
a) doesn’t agree … pretend it never happened, hoping the sceptics never read the forecast
b) shows even mild agreement … shout it from the rooftops that the sceptics never listen and intentionally dissed their paper … and hope the next data doesn’t go the sceptics’ way.
So let’s use a simple example. Suppose CO2 rises by 1 unit each year, and the amount of hot air from hair dryers is a constant 1 unit each year except the last, when it is 2.
The temperatures in the three years on record are 2, 4, 5. Fit CO2 first and the scaling factor comes out at 2, which on its own predicts 2, 4, 6. The year-three shortfall of 1 coincides with the extra hair dryer, so the scaling factor for hair dryers must be -1 (plus an offset of +1 so that the first two years still match).
Thus, based on their (moronic) modelling, they can say “without hair dryers the temperature would have been 1 higher”.
Notice that there is absolutely no physical relationship, no testing, nor any reason at all why hair dryers should affect the climate – yet they can quite rightly say “their model says that without hair dryers it will be warmer”.
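For anyone who wants the circularity spelled out, here is that toy fit in a few lines of numpy (my own illustration of the logic above, obviously not anyone’s actual attribution code):

```python
# Fit the toy "climate model" T = a*CO2 + b*dryers + c to three years of data,
# then re-run it with the hair-dryer term switched off to get the
# "what it would have been without hair dryers" counterfactual.
import numpy as np

co2 = np.array([1.0, 2.0, 3.0])        # rises by 1 each year
dryers = np.array([1.0, 1.0, 2.0])     # constant, except the last year
temp = np.array([2.0, 4.0, 5.0])       # the "observed" record

X = np.column_stack([co2, dryers, np.ones(3)])
a, b, c = np.linalg.lstsq(X, temp, rcond=None)[0]
print(f"CO2 factor = {a:.1f}, hair-dryer factor = {b:.1f}, offset = {c:.1f}")

without_dryers = a * co2 + c           # same fitted model, hair dryers removed
print("with hair dryers:   ", X @ np.array([a, b, c]))
print("without hair dryers:", without_dryers)
```

The fit reproduces the numbers above (CO2 factor 2, hair-dryer factor -1, offset 1), and the “counterfactual” run is warmer – not because hair dryers were ever shown to cool anything, but because that is what the fitted scaling factors say.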
I don’t know if you have ever seen this interview, but it’s a pretty candid one from a mainstream climate scientist, Hans von Storch. He even hints a bit at what I was asking about how we know what natural variation has been occurring in tandem with human effects:
“Of course, that evidence presupposed that we had correctly assessed the amount of natural climate fluctuation. Now that we have a new development, we may need to make adjustments.” http://www.spiegel.de/international/world/interview-hans-von-storch-on-problems-with-climate-change-models-a-906721.html