Unfortunately, this is a “42” kind of article – as in, the answer to everything. Yes, I have the answer – but it won’t mean much to anyone until I’ve found the right way to introduce it. But it is definitely the answer. A real Eureka moment – so I feel it is worth the post even if I’m going to be the only one celebrating.
For a while I have been trying to find the noise model used in climate simulations. I knew there was a problem, but all attempts to find such a model had failed until tonight. Then I read “The ESSENCE project – signal to noise ratio in climate projections”:
> Different ensemble members are generated by disturbing the initial state of the atmosphere. Gaussian noise with an amplitude of 0.1K is added to the initial temperature field. … The basic ensemble consists of 17 runs driven by a time-varying forcing.
Eureka!!!
I now know exactly why all these climate academics think they understand the climate and don’t. From a purely physics perspective this is the end of the global warming scare. Obviously, there’s more to the scare than just physics, but physics is the foundation and I can now show where that foundation is wrong.
What it means is I know why natural variation has been mistaken for a man-made signal, and I can prove it. It explains why, for example, in the 2011 paper “Separating signal and noise in atmospheric temperature changes: The importance of timescale”, Santer et al. found that:
> the decrease in noise amplitude with increasing trend length, so that any errors in model signal trends are less obscured by noise on longer time-scales.
This is not true. In fact the variance increases approximately in proportion to the log of the time-scale. Unfortunately, just stating this won’t cut any ice. Instead I have to show how their methods produce the wrong variance. I can now do that.
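To make the disputed quantity concrete, here is a rough, purely illustrative sketch of how the noise amplitude of least-squares trends can be estimated as a function of trend length. The synthetic AR(1) noise and its parameters are assumptions for the sketch, not values taken from any climate model or from Santer et al.:

```python
# Illustrative only: estimate the spread ("noise amplitude") of least-squares
# trends as a function of trend length, using synthetic AR(1) noise.
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.6, sigma=0.1):
    """Synthetic 'unforced variability' as a monthly AR(1) process (in K)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

n_realisations = 500                        # Monte Carlo sample of noise-only series

for trend_len in (120, 240, 480):           # trend lengths in months (10, 20, 40 yr)
    slopes = []
    for _ in range(n_realisations):
        y = ar1_series(trend_len)
        t = np.arange(trend_len)
        slopes.append(np.polyfit(t, y, 1)[0] * 120)   # least-squares trend, K/decade
    print(f"{trend_len:3d} months: trend noise (std) = {np.std(slopes):.4f} K/decade")
```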
Till tonight, I’ve only been able to say they were wrong without really knowing why. Now I am able to understand how they could come to such a counter-intuitive conclusion as to suggest natural variation reduces over longer periods. Now I understand why the variance in their models reduces as time-scales increase. This allows me to explain some very peculiar things:
- the belief that longer time-scale projections are more accurate – which is akin to saying it is easier to predict the weather next week than this week.
- the belief that even though they cannot predict the climate over 1 year, they can over longer periods. And indeed, the idea that even though they couldn’t predict the climate over the last 15+ years, they can predict it over the next century.
- the reason why the patently massive noise-to-signal ratio (which makes it impossible to see any signal) is mistaken for a massive signal-to-noise ratio.
Now there is only the small issue of explaining this. That will take time.
But in the flicker of noise there is the answer.
> the noise model used in climate simulations
That’s a very odd thing to say, because there isn’t one – not in the sense of a noise model used *in* the simulations. There may, however, be one used in interpreting the simulations.
> Different ensemble members are generated by disturbing the initial state of the atmosphere
This is referring to a technique for generating ensemble members. All it does is add a very small perturbation to the initial state. This state is integrated forwards, but the noise is only added at the beginning. It’s unexciting and uncontroversial. You can’t possibly have learnt anything interesting from this.
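For anyone unfamiliar with the technique, a minimal sketch of what it amounts to (not the actual ESSENCE code; the grid size and the trivial model step are placeholders):

```python
# Sketch of initial-condition ensemble generation: Gaussian noise with an
# amplitude of 0.1 K is added to the initial temperature field ONCE, and each
# member is then integrated forward with no further noise.
import numpy as np

rng = np.random.default_rng(42)

def step_model(field):
    """Placeholder for one deterministic GCM time step."""
    return field  # a real model would evolve the field here

def make_ensemble(initial_field, n_members=17, amplitude=0.1):
    """Create ensemble members by perturbing only the initial state."""
    return [initial_field + rng.normal(scale=amplitude, size=initial_field.shape)
            for _ in range(n_members)]

initial_field = np.full((64, 128), 288.0)          # dummy temperature field (K)

runs = []
for member in make_ensemble(initial_field):
    for _ in range(100):                           # integrate forward in time;
        member = step_model(member)                # noise was added only at t = 0
    runs.append(member)
```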
> The basic ensemble consists of 17 runs driven by a time-varying forcing
That’s 2007; of course a 17-member ensemble would be small nowadays. The forcing is “greenhouse gases (GHG) and tropospheric sulfate aerosols are specified from observations, while for the future part (2001-2100) they follow the SRES A1b scenario”. That’s not noise, of course.
> saying it is easier to predict the weather next week than this
No-one claims this. But I tell you that it’s easier to predict the 3-month JJA temperature next summer than the temperature of a given day next month. You simply have to look at the statistics – the variance of a single day is far higher. And this is basic statistics; there is (almost) no physics in there.
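A toy demonstration of that statistical point, using synthetic daily temperatures with arbitrary assumed numbers (real daily temperatures are autocorrelated, so the reduction is smaller in practice, but the direction is the same):

```python
# The variance of a single day is far larger than the variance of a
# 92-day (JJA) seasonal mean. Numbers below are arbitrary, not observations.
import numpy as np

rng = np.random.default_rng(1)
n_years = 1000
daily_sd = 3.0                              # assumed day-to-day standard deviation (K)

# one JJA season (92 days) per synthetic "year"
jja = rng.normal(loc=16.0, scale=daily_sd, size=(n_years, 92))

single_day_var = jja[:, 0].var()            # variance of one given day, across years
seasonal_mean_var = jja.mean(axis=1).var()  # variance of the 3-month mean, across years

print(f"variance of a single day: {single_day_var:.2f} K^2")
print(f"variance of the JJA mean: {seasonal_mean_var:.3f} K^2")
```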
I don’t really understand what you’re getting at here. As William says above, the initial perturbations are simply to produce slightly different initial conditions. If you didn’t do this, they’d all be the same. In the absence of any changes to the forcings, areas where the perturbations push the temperatures above equilibrium would cool (on long enough timescales) and areas where the perturbations push the temperatures below equilibrium would heat up. So you seem to be suggesting that this initial noise somehow grows with time and hence allows a natural signal to appear anthropogenic. I can’t see how this is possible. Without some kind of external driver, this noise should disappear (I think) and the system should settle into an equilibrium determined largely by the external drivers. I’m not convinced that you really have done what you think you have.
> So you seem to be suggesting that this initial noise somehow grows with time
Well it does – but only in the sense that *any* initial perturbation, no matter how small, grows when compared to the “unperturbed” state. In that sense, their “an amplitude of 0.1K” is vastly in excess of what is needed – you could use 0.00001 K and you’d get scientifically identical results, though the runs would differ.
> and hence allows a natural signal to appear anthropogenic.
But in this you’re correct. The “weather noise” amplifies until it saturates; but there is no “climate noise” in there.
This is rather similar to the discussion that grew up out of http://scienceblogs.com/stoat/2013/07/27/oh-dear-oh-dear-oh-dear-chaos-weather-and-climate-confuses-denialists/
[BTW: I’m not going to comment over at Wotts about the CO2 and stuff, because you’re already getting beaten up quite enough over there. I will say though (a) you’re wrong to say that we can’t be certain the CO2 rise is anthro. You’re also wrong to say that the info for that isn’t readily available; all you have to do is look. You’re also wrong about the pre-1958 stuff. But also (b) isn’t it interesting how many people want to talk? Blogs are more interesting when people talk.]
William, I suspect we’re saying the same things in different ways (in fact, in reading my first comment, I said it badly). If I take a standard grid-based code that is completely noiseless then it does nothing unless forced in some way. So, if I understand you correctly, what you’re saying is that you need the initial perturbations so as to produce some kind of initial structure that can then evolve into the various weather patterns resulting from those initial conditions. What I was referring to was the comment in the post that went
From what you’re saying the noise does grow to produce the weather, but it can’t grow to produce some kind of long-term climate signal. I think that ScottishSceptic is inferring that the growth in the “noise” can then appear to produce a climate signal which, as far as I’m aware (and if I understand your point), is not correct.
Aaaaaaaahhhhhhh I think you have provided the key. OK, now I do at least understand what our host is saying in this post, and with things like “In fact the variance increases approximately in proportion to the log of the time-scale.”
But he is completely wrong. There are some very nice (if I say so myself; I drew them) pix of perturbation growth over time at:
http://mustelid.blogspot.co.uk/2005/10/butterflies-notes-for-post.html
(and the *pattern* of perturbation growth is also fascinating, if you know your met.). But the important point to note there is that the perturbation growth saturates after 30 days, ish. At that point, information about the initial state is effectively lost – you could have started from any (atmospheric) state.
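If it helps to see that behaviour outside a full GCM, here is a toy sketch using the Lorenz-63 system (standard parameters assumed; this is not the model behind the pictures linked above): two runs differing by a tiny initial perturbation diverge, and then the separation saturates once memory of the initial state is lost.

```python
# Perturbation growth and saturation in the Lorenz-63 toy system.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-8, 0.0, 0.0])      # tiny initial perturbation

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")

# The separation grows by many orders of magnitude and then levels off
# ("saturates") at roughly the size of the attractor itself.
```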
So ScottishSceptic: the key point here is to distinguish weather “noise” from climate “signal”. You can’t predict weather over any period longer than, say, 30 days – even the tiniest perturbation leads to divergence. You can, however, predict model climate as far into the future as you care to. In my runs, which are atmosphere-only with forced SSTs, the climate is stable, and any 30-year period from a run will give you essentially the same statistics. If you tried the same thing with a coupled ocean-atmosphere model then you’d get some longer-term variation from the ocean; but if you did it in a model with a stable ocean (like HadCM3) you’d get *essentially* the same climate out of any 30-year period of a 1000-year run.
That’s for runs where the forcing is kept fixed. For the runs you’ll be more familiar with, if you take HadCM3 and ramp up the forcing at 1% CO2/year, or some such, and you do this in an ensemble of perturbed runs, then again you’ll get the same climate from all the model runs, provided you pick the same future timespan to calculate your climate for.
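As a toy version of the fixed-forcing case (not output from HadCM3 or any GCM – just a stationary AR(1) series with arbitrary assumed parameters), any 30-year window of a long “control run” gives essentially the same climate statistics, even though individual years differ:

```python
# 30-year climate statistics from different windows of a long stationary series.
import numpy as np

rng = np.random.default_rng(7)
n_years = 1000
phi, sigma = 0.5, 0.15                   # assumed year-to-year persistence and noise (K)

annual = np.zeros(n_years)
for t in range(1, n_years):
    annual[t] = phi * annual[t - 1] + rng.normal(scale=sigma)
annual += 14.0                            # arbitrary mean climate (deg C)

for start in (0, 300, 600, 900):          # four different 30-year windows
    window = annual[start:start + 30]
    print(f"years {start:4d}-{start + 29:4d}: "
          f"mean = {window.mean():.2f}  std = {window.std():.2f}")
```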
I hear what you are saying. The post is very helpful as it confirms what I found on the handling of variance in the climate models. However, understanding a problem doesn’t mean I can convince other people. To convey information, one needs a shared conceptual vocabulary of ideas; otherwise we will just talk at cross purposes, like a Chinese plumber and an African witchdoctor.
I need to learn the African for a monkey wrench.